tag:blogger.com,1999:blog-283208512023-01-30T02:24:05.670-08:00Uncut blocks of woodBurak Emirhttp://www.blogger.com/profile/01651698653168460072noreply@blogger.comBlogger48125tag:blogger.com,1999:blog-28320851.post-43907148071924215192022-09-11T09:36:00.002-07:002022-09-12T06:10:00.298-07:00Cyber Ontology Stamp Collection<p>Just a short and shallow note about knowledge representation, specifically in the domain of cybersecurity. As usual, I am the sole owner of my opinions. <blockquote class="twitter-tweet"><p lang="en" dir="ltr">I'd want to know how shared conceptualization can be leveraged to more effectively prevent and defend against cyber attacks.</p>— Burak Emir (@burakemir) <a href="https://twitter.com/burakemir/status/1500110232114106381?ref_src=twsrc%5Etfw">March 5, 2022</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script> <h2>Ontology and Knowledge Graphs</h2> <i>Ontology</i> is the word you use when you get tired of saying "machine-readable representation of concepts" too many times. The machine-readable part is important: over the decades, everyone working in large organizations runs into the need for modeling <i>domains</i> in software, but software devs need to work with crisp, stable definitions. The people working in the domain (the <i>domain experts</i>) do not have such definitions ready and sometimes keep them deliberately flexible. So defining an ontology is not a philosophical exercise in the nature of things, but actually tied to the goal of helping the domain experts do their work. <p>An ontology is like a controlled vocabulary. As software devs know, there is rarely a universally accepted standard machine-readable representation. Everyone defines their own databases, schema definitions etc. Therefore, ontology enters the picture when there are multiple (typically large) independent organizations and we are interested in exchanging information. 
Unlike XML or file formats, an ontology deals with classes of objects (entities) and binary relations. This comes from description logic as well as the semantic web / RDF, which I won't go into. <p>The essential thing is that in the description logic / semantic web approach, knowledge representation can be separated into <i>terminology definitions</i> (TBox) and <i>assertions</i>/statements (ABox). The terminology definitions are similar to a schema or object model and define what kind of objects and relations we can talk about. For instance, we may want to talk about <code>blog, author, post</code> and connections such as <code>isOwner, wrote, linksTo</code>. The assertion "Author Burak owns Blog Uncut Blocks of Wood" can then use these definitions. <p>Of course, none of this works really well. Barry Smith, an expert on ontology, makes this point in the first 10 mins of his entertaining talk <a href="https://www.youtube.com/watch?v=p0buEjR3t8A">Basic Formal Ontology</a> (BFO), where he says: "people go and create thousands of ontologies and reproduce the silos that ontologies were supposed to overcome". This is reality. <h2>Cybersecurity: STIX and ATT&CK</h2> <p>Let's shift gears and pick an interesting domain now, one where we really all are interested in progress. Would it not be nice if the various words that people in charge of cybersecurity are throwing around were organized in a consistent manner, and actionable, useful information ("intelligence") shared between organizations? Would this not let us exchange information, and rapidly so, and realize the dream of having good actors build a united front against the bad guys? <p>Let's not get ahead of ourselves. If you listened to Barry Smith, you know that using ontologies is not a silver bullet. It is nevertheless interesting to see what organizations are throwing out there. And if you are here for the knowledge representation, you can still see cybersecurity as an interesting case study of how these things can play out in practice. 
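<p>Before diving in, here is the earlier TBox/ABox distinction as a minimal Python sketch (all names are invented for illustration, not any standard ontology API): the TBox declares which classes and binary relations exist, and the ABox holds assertions that can be checked against it.

```python
# Hypothetical sketch of the TBox/ABox split (invented names, not a real
# ontology library).

# TBox: which classes exist, and the domain/range signature of each relation.
CLASSES = {"Blog", "Author", "Post"}
RELATIONS = {
    "isOwner": ("Author", "Blog"),
    "wrote": ("Author", "Post"),
    "linksTo": ("Post", "Post"),
}

# ABox: typed individuals and concrete statements about them.
individuals = {"Burak": "Author", "UncutBlocksOfWood": "Blog"}
assertions = [("isOwner", "Burak", "UncutBlocksOfWood")]

def conforms(assertion):
    """Does an assertion respect the relation's domain and range?"""
    relation, subject, obj = assertion
    domain, range_ = RELATIONS[relation]
    return individuals[subject] == domain and individuals[obj] == range_

print(all(conforms(a) for a in assertions))  # True
```

Real systems express the TBox in OWL or RDF Schema rather than in code, but the separation is the same: terminology on one side, statements on the other.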
<p>When we talk about controlled vocabulary, we are talking about a form of standardization. Richard Struse (Homeland Security, MITRE, Oasis, ...) saw the need and the opportunity to define a controlled vocabulary that goes by the name Structured Threat Information Expression (STIX™). The MITRE ATT&CK "framework" is a collection of objects describing attacker behavior. <p>STIX defines the terminology: the objects of interest. <p>For the technically minded, there are <a href="https://github.com/oasis-open/cti-stix2-json-schemas/tree/master/schemas/sdos">JSON Schemas for the domain objects</a> and likewise for the relations. It is a good exercise to build code that reads these things. There is also a protocol called TAXII for realizing the information exchange. <h2>STIX Objects</h2> We have 19 <a href="https://docs.oasis-open.org/cti/stix/v2.1/os/stix-v2.1-os.html#_nrhq5e9nylke">domain objects</a>, and relations between them. I will not list them all, but give an example of the "targets" relation. Consider the statement: the attack-pattern 'SQL Injection' targets the identity 'domain administrators'. <ul> <li>We have objects of type <a href="https://docs.oasis-open.org/cti/stix/v2.1/os/stix-v2.1-os.html#_axjijf603msy"><b>attack pattern</b></a> <li>We have objects of type <code>identity</code> (more on this later) <li>We can have a relation (directed edge) of type <code>targets</code> between an attack-pattern object and an identity object. </ul> <p>You get the idea. There are various relationships, and in the end we can imagine a graph with objects being the nodes and the relations being the edges. <h2>MITRE ATT&CK</h2> <p>As mentioned: MITRE ATT&CK is a knowledge base that makes use of the STIX vocabulary. It is a collection of statements that uses the terminology. It is called a knowledge base because, unlike with a relational database, the knowledge and added value come from the connections. 
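<p>To make this concrete, here is roughly what the example statement "the attack-pattern 'SQL Injection' targets the identity 'domain administrators'" looks like as STIX 2.1 JSON, built in Python. The field names follow the STIX 2.1 spec; the UUIDs and timestamps are made up for illustration.

```python
import json

# Two STIX 2.1 domain objects (ids and timestamps invented for illustration;
# field names per the STIX 2.1 spec).
attack_pattern = {
    "type": "attack-pattern",
    "spec_version": "2.1",
    "id": "attack-pattern--0c7b5b88-8ff7-4a4d-aa9d-feb398cd0061",
    "created": "2022-09-11T00:00:00.000Z",
    "modified": "2022-09-11T00:00:00.000Z",
    "name": "SQL Injection",
}
identity = {
    "type": "identity",
    "spec_version": "2.1",
    "id": "identity--5c2a9f8a-3f3e-4b6f-9a6a-0d6f2c5c8e21",
    "created": "2022-09-11T00:00:00.000Z",
    "modified": "2022-09-11T00:00:00.000Z",
    "name": "domain administrators",
    "identity_class": "group",
}
# The statement itself is a relationship object: a directed "targets" edge.
relationship = {
    "type": "relationship",
    "spec_version": "2.1",
    "id": "relationship--44298a74-ba52-4f0c-87a3-1824e67d7fad",
    "created": "2022-09-11T00:00:00.000Z",
    "modified": "2022-09-11T00:00:00.000Z",
    "relationship_type": "targets",
    "source_ref": attack_pattern["id"],
    "target_ref": identity["id"],
}
print(json.dumps(relationship, indent=2))
```

Note how the relationship is itself a first-class object: this is what lets tools store and exchange the edges of the graph as uniformly as the nodes.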
<p>It seems that the cybersecurity industry picked up MITRE ATT&CK as a buzzword to signal features/capability, something to enhance the credibility of cyber services and products. It is very interesting that the vocabulary definitions - the ontology - did not generate as much buzz. To some extent, this is expected: the domain model is something left to domain experts, and the service industry sells the expertise and need not burden the customers with "the details". If you want to get into cybersecurity, you have the option of learning from the ATT&CK "framework", but also of picking up the lingo from the underlying interchange format. It is not meant as a learning resource for beginners, but the open source style availability of all this is clearly a good thing. <h2>Socio-technical limits of knowledge exchange</h2> <p>This approach to knowledge graphs is entirely standard, but an obvious thing to point out is that it is very limiting that relations are binary. But since this is standard in these approaches to ontologies, let's not get hung up on it.</p> <p>We have a shared/standardized vocabulary, what is not to like? Why did we not crush the bad guys yet? There are some practical obstacles to "information sharing" in general, as Phil Venables lays out in a recent post <a href="https://www.philvenables.com/post/crucial-questions-from-governments-and-regulators">Crucial Questions from Governments and Regulators</a>. There are clearly more obstacles, such as trust, but no need to go further. <p>Let's instead narrow this down to organizations that are capable of writing their own software and are committed to keeping their own threat intel. We get to more specific socio-technical obstacles. This shines new light on the silo problem already mentioned above.</p> <p><b>Challenging the vocabulary.</b> Take the <code>identity</code> object type. The docs say "Identities can represent actual individuals, organizations, or groups (e.g., ACME, Inc.) 
as well as classes of individuals, organizations, systems or groups (e.g., the finance sector)." What if I want to organize my threat intelligence in a way that doesn't lump all of these together? <p>Reasonable arguments can be made to defend this choice: that a more precise ontology that distinguishes between groups of people or roles is beyond the scope of STIX; that this is for exchange in the end, and therefore some kind of mapping is to be expected. However, it looks like there are some real limits that are not easily overcome by any machine-readable representation. <p><b>Are the statements useful?</b> In the end, all this information is for people. People who may work with different capabilities, different models, different ideas on granularity and naming. What if the organization supposed to consume this information cannot do anything with the attack pattern "SQL Injection"? Or, they very much could, but choose not to represent SQL Injection as its own thing, because it is useless for their purposes? Every organization is subject to its own boundaries. A list of attack patterns is not by itself a systematic approach: it is literally just a list of stuff. <p>What are the processes that will make use of this information? In any case, it seems that adaptation, mapping and possibly human curation are necessary if one wants the exchanged information to have practical benefits. <h2>Detection</h2> <p>There are other types of information that can be shared. 
For example, here are collections of <i>detection rules</i>: <ul> <li>Sigma signatures <a href="https://github.com/SigmaHQ/sigma/wiki/Specification">spec</a> <li>Chronicle <a href="https://github.com/chronicle/detection-rules">Detection Rules</a> <li>Elastic Security <a href="https://github.com/elastic/detection-rules">detection rules</a> <li>Splunk security <a href="https://research.splunk.com/detections/">detection rules</a> </ul> <p>At least here, it is very clear what the process is: enhance security monitoring so that it covers the relevant (!) parts of these open source repositories. Once again: not everything in there might be relevant. The same curation/adaptation may be necessary, and maybe a mapping between log formats and the internal representation so these rules make sense. However, due to the narrower scope of detection, it is immediately clear what one would use these things for. <h2>Wrap up. Socrates strikes.</h2> <p>I promised a shallow post, and here we are. Knowledge representation is an exciting technical challenge, one that is relevant to many industries, and it is happening: it has been picked up in many large organizations (e.g. the use of ontology in life sciences, genetics, or the military). Alas, we can also see how using ontology as a domain model comes with limitations: relations are binary, and representing knowledge has to be tied to some actual human process in order to be useful. 
<p>I will end with this Socrates quote (source <a href="https://newlearningonline.com/literacies/chapter-1/socrates-on-the-forgetfulness-that-comes-with-writing">Socrates on the forgetfulness that comes with writing</a>): <blockquote>Well, then, those who think they can leave written instructions for an art, as well as those who accept them, thinking that writing can yield results that are clear or certain, must be quite naive and truly ignorant of [Thamos’] prophetic judgment: otherwise, how could they possibly think that words that have been written down can do more than remind those who already know what the writing is about?</blockquote> <p>I can get excited about information sharing when it comes to education, "speaking the same language." When it comes to practical cybersecurity processes, the usefulness of machine-readable knowledge seems to stand and fall with the tools and how they support the people who need to make use of this information. I have doubts that ontology can serve as the defining element of a community or movement of the good guys. However, it also seems better than nothing in a field like cybersecurity that needs to deal with a fast-changing environment of evolving technology and adversaries. Exchanging threat intelligence with STIX may help those who "already know what the writing is about," and, after due mapping, adaptation and curation to their own model, serve as one step towards coordinated response.Burak Emirhttp://www.blogger.com/profile/01651698653168460072noreply@blogger.com0tag:blogger.com,1999:blog-28320851.post-79191819308733089362022-06-07T02:51:00.001-07:002022-06-07T02:51:39.304-07:00There is only stream processing.<p>This post is about what to call stream processing. Strangely enough, it has little to do with terminology, but almost everything with perspective. <p>If you are pressed for time (ha!), the conclusion is that "stream processing vs batch processing" is a false dichotomy. 
Batch processing is merely an implementation of stream processing. Every time people use these as opposites, they are making the wrong abstraction. <h2>Can a reasonable definition be general?</h2> <p>A reasonable definition of a data stream is a sequence of data chunks, where the length of the sequence is either "large" or unbounded. <p>This is a pretty general definition, especially if we leave open what "large" means. <ul> <li>Anything transmitted over TCP, say a file or HTTP response <li>a Twitter feed <li>system or application logs, say metadata from any user's visit to this blog <li>the URLs of websites you visit <li>location data acquired from your mobile phone <li>personal details of children who start school <i>every year</i></ul> <p>It also stands to reason that stream processing is processing of a data stream. A data stream is data that is <i>separated in time</i>.</p> <p>When a definition is that general, engineers get suspicious. Should it not make a difference whether a chunk of data arrives every <i>year</i> or every millisecond? Every conversation about stream <i>processing</i> happens in a context where we know a lot more about the temporal nature of the sequence. <p>The same engineers, when dealing with distributed systems, have no problem referring to large data sets as a whole, even if they fully know that the data may be distributed over many machines. We may well call a large data set data that is <i>separated in space</i>. A data stream could also be both, separated in time and space (maybe we could call that a "distributed data stream"). <h2>At the warehouse</h2> <p>Where am I going with this? Let's quickly establish that "batch processing" is but a form of stream processing, in the above definition.</p> <p>What do people call <i>batch processing</i>? The etymology of this goes back to punch cards, but it is not about those. <p>Batch processing is frequently found in data warehousing, or the extract-transform-load (ETL) process. 
This is a setup that has been around since the 1970s, and this does not make it bad. What is essential is that data is <i>periodically</i> ingested (extract), say in the form of large files. It is then turned (transform) into a uniform representation that is suitable for various kinds of querying (load). <p>Accumulating data and processing it periodically, we have seen this before. Does the data fit the general definition of a data stream? Surely, since there is a notion of new data coming in. </p> <p>What could be alternative terminology? Tyler Akidau uses the word "batch engine" in this <a href="https://www.oreilly.com/radar/the-world-beyond-batch-streaming-101/">post on stream processing</a>. So the good old periodic processing could be called "stream processing done with a batch engine." <p>I promised that this is not only about terminology. When did people first feel the need to distinguish batch processing from stream processing? <h2>Data stream management systems</h2> <p>The people who systematically needed to distinguish processing of data streams from processing of simply large data sets were the database community. Dear academics, I don't mean to hurt any feelings, but I will just count all papers on "data stream processing" or "complex event processing" as database community. <p>The database researchers implemented efficient evaluation of a (reasonably) standard, high level, declarative language: relational queries (SQL). Efficient evaluation and performance meant making the best use of available, bounded resources. As part of this journey, architectures appeared that pass streams of partial results between operators (such as Volcano, or the iterator model). <center> <img src="//burakemir.ch/relational-query.png"> </center> <p>Take a look at the relational query (SELECT becomes $\pi$, WHERE becomes $\sigma$, JOIN becomes $\Join$). What would happen if we flipped the direction of the arrows? 
Instead of bounded data like the "Employee" and "Building" tables, we could have continuous streams that go through the machinery.</p> <p><i>Exercise:</i> draw the diagram above, with all arrows reversed, and think about how this could be a useful stream processing system. Maybe there is a setup procedure that has to happen in each building before an engineer starts working from there, and they would like to know about arrivals and new departures. <p>Data stream management systems (DSMS) became a thing 20 years ago when implementors realized that most of their stuff would continue to work when tuples come in continuously, <i>mutatis mutandis</i>. <ul> <li>Michael Stonebraker, of PostgreSQL fame, built a system called Aurora and founded a company called StreamBase Systems <li>At Stanford they wrote about the <a href="http://infolab.stanford.edu/stream/">Stanford Stream Data Management System</a> and the Continuous Query Language (CQL) <li>(I'm not going to do a survey here) </ul> <p>Academic authors will delineate their field using descriptions as follows. <ul> <li>"high" rate of updates <li>requirement to provide results in "real time" (bounded time) <li>limited resources (e.g. memory)</li> </ul> <p>We are now getting closer to the more specific meaning people attach to "stream processing:" not only do we receive the data in chunks at different times, we also need to produce results in a "short" amount of time, or with "bounded" machine resources. <p>In order to understand why database folks embraced streaming, one should know that evaluating queries in a DBMS with high performance is also a form of stream processing. When a user types a SQL query at the prompt (<code>SELECT ... FROM ...</code>, the all-caps giving that distinctive pre-history feel), the result should come in as quickly as possible, and there are surely some limits on the machine resources. So generalizing query evaluation to continuously arriving data really was a logical next step to overcome limitations. 
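<p>The iterator (Volcano) model can be sketched with Python generators (a toy sketch with invented table contents, not any real engine's API): each operator pulls tuples from its input one at a time, so nothing in the pipeline requires the source to be bounded.

```python
# Iterator-model sketch: each relational operator is a generator that
# pulls tuples from its input one at a time.
def scan(table):            # leaf: yields tuples from a (possibly unbounded) source
    yield from table

def select(pred, child):    # sigma: filter tuples by a predicate
    return (t for t in child if pred(t))

def project(cols, child):   # pi: keep only some columns
    return ({c: t[c] for c in cols} for t in child)

# Invented toy data: employees with the building they work in.
employees = [
    {"name": "Ada", "building": "B1"},
    {"name": "Grace", "building": "B2"},
    {"name": "Barbara", "building": "B1"},
]

# SELECT name FROM Employee WHERE building = 'B1', as a pull pipeline.
plan = project(["name"], select(lambda t: t["building"] == "B1", scan(employees)))
print(list(plan))  # [{'name': 'Ada'}, {'name': 'Barbara'}]
```

Flipping the arrows, as in the exercise, amounts to turning this pull pipeline into a push pipeline: instead of operators asking for the next tuple, sources push tuples through the same operators as they arrive.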
<h2>Interlude: Windowing and micro-batching</h2> <p>If you did the exercise, you may have wondered how a join is supposed to work when we take a streaming perspective. There are multiple answers. <p>A particularly nice and useful scenario is when the joins can be treated as simple lookups, for example when each row in a data stream is enriched with something that is looked up via a service. <p>Another scenario is the <i>windowed stream join</i>: here we consider bounded segments (windows) of two data streams and join what we find in those. Usually, this requires some guarantees: the streams are <i>a priori</i> not synchronous. They may either be <i>synchronous enough</i>, or one may use some amount of buffering and periodically process what is in the buffer. <p>Wait - did I just write "periodic processing"? That is right, so it looks like when stream processing is hardcore enough, it contains periodic processing again. This is where usually people will say things like "micro-batch." There are simply scenarios in stream processing that cannot be done without periodic processing (everything that involves windows in processing time). <h2>A horizontal argument</h2> <p>Now, the words "limited machine resources" meant something different 20 years ago. We can (and do) build systems that involve many machines and communication: sharding, or "horizontal scaling". From the good old MapReduce to NoSQL and Craig Chambers' FlumeJava (which lives happily on as Apache Beam) to Spark and Cloud Dataflow, there is a series of systems and APIs that deal with stream processing in a distributed manner. <h2>Tying the knot</h2> <p>The irony of talking about "batch processing" today is that periodic processing of large files also involves distributed stream processing underneath. When a large input data set is processed with a MapReduce, it is distributed across a set of workers that, in the map phase, locally produce partial results. 
The <i>shuffle</i> phase then takes care of getting all partial results to the right place for the reduce phase. The "separation in space" is also a "separation in time:" a particular partition can be "done" before others, which means that the result is itself a stream. <p>Depending on the level at which one is discussing a system, the performance expectations we have, the trade-offs, one may be able to ignore spatial and temporal separation. It seems that recognizing the temporal separation always brings advantages.</p> Burak Emirhttp://www.blogger.com/profile/01651698653168460072noreply@blogger.com2tag:blogger.com,1999:blog-28320851.post-55860599179224738532022-04-08T06:28:00.010-07:002022-09-11T10:00:36.749-07:00Polaroids from Swiss Cyber Security Days SCSD Day 1Here are a few things I took home from Day 1 of the <a href="https://swisscybersecuritydays.ch/">Swiss Cyber Security Days</a> conference (SCSD), which was two days ago. I work as a software engineer and am used to dealing with technical angles, but cyber security is one of the areas where technological change impacts society at large. I made the decision to go shortly after the Russia-Ukraine war broke out; if cyber security was a pressing problem before, the new political era should make it impossible for society to ignore, and I wanted to find out first-hand what that means. <p>(Note: the title is a kind of metaphor that is resolved at the end. The public domain images I include here are random things from the interwebs, not actually from the conference.)</p> <h1>Switzerland Officials and National Councillors</h1> <p>Day 1 was opened according to official protocol by Doris Fiala, member of the Swiss National Council, member of the parliamentary Security Policy Committee, and president of SCSD. She greeted many of the present officials and parliamentary members. The morning featured Swiss officials including high ranking ones who are directly responsible for dealing with cyber security. 
<p>The speakers provided perspectives on Swiss national policy, the role of the federal government and the armed forces. The officials spoke in the national languages German and French, often switching from one to the other in the same talk, and there was simultaneous translation (like in UN conferences). A few highlights: <p><b>Florian Schütz</b> has been acting as Swiss federal Cyber Security Delegate since 2019 and reported on strategy and outlook. He talked about the activities of the <a href="https://www.ncsc.admin.ch/ncsc/en/home.html">National Cyber Security Center NCSC</a>, and how it relates to the Financial Sector security center which was <a href="https://www.ncsc.admin.ch/ncsc/en/home/dokumentation/medienmitteilungen/newslist.msg-id-87892.html">founded this week</a> in Zurich. It is a fact that cyber security means something different in every sector and that a federal center would be limited in depth; this can be addressed by creating and supporting sector-specific cyber security centers where decision makers from companies in that sector can participate directly. He also pointed out that most companies in Switzerland are SMBs, and even if many are aware of the problem now, they do not know what concretely to do about the risk. <p>The majority of Swiss companies have less than 0.5M CHF revenue per year, of which they might be able to set aside 3-5% for cyber security. This is not a lot! Getting companies to take the risk seriously and improve their security posture through awareness and training was picked up by other speakers. <p>The challenge for official institutions is that they have in general not been made aware of cyber security incidents, since companies are not incentivized to report breaches. There is a current proposal to make reporting mandatory, which would help improve visibility. <p>Major General <b>Alain Vuitel</b> is director of the Project Cyber Command of the Swiss Armed Forces, and talked about the process of establishing Cyber Command. 
This is a significant step, since the army is structured into four existing commands and the Cyber Command will become the fifth one. Vuitel pointed out the importance of information in warfare, that this is a factor in Ukraine, and that civil society suffers collateral damage from attacks on communication infrastructure like satellites and from disinformation campaigns. <p>The cyber attack on the KA-SAT satellite communications network on Feb 24th, the day Russia attacked Ukraine, led to loss of maintenance capabilities for a German operator of wind turbines. Also in the weeks leading up to the outbreak of war, there were cyber attacks on Ukrainian infrastructure. War is no longer only armed conflict; covert operations and information war mean that in times of "peace" there are attacks going on, and this accepted fact was observed in real time. Vuitel also pointed out that modern communication brings a new potential: every citizen with a mobile phone can act as a sensor, providing information, provided they have the freedom to act, which underlines the importance of communication infrastructure. <p><img src="https://cdn2.picryl.com/photo/2020/09/22/cyber-security-specialists-from-the-tennessee-national-aa5d31-1600.jpg" width="80%" alt="cyber security specialists at work"/> <p>Finally, he pointed out that while recruitment is challenging, Switzerland has mandatory military service and a militia system. This means the armed forces are able to reach large parts of the population and provide them with cyber security training, and it provided the means to recruit their first cyber battalion. <p><b>Judith Bellaiche</b>, National Councillor and CEO of SWICO, described the evolution of cyber security as reflected in parliamentary discussions over the last 15 years. 
Political discourse in the Swiss parliament reflects society, and the topic of cyber security evolved from early years of growing awareness, demanding reports and making sense of the topic, to increasing political demands for new official competencies and laws. Most impressively, she summarized this as raw fear: the parliamentary representatives feel that the situation is out of control and demand action, with proposals to expand the role of the state. In order to validate this, representative surveys were conducted, and it turned out that the general population is even more afraid and leans towards expanding the role of the state. This is a political development which will likely have consequences. It would be a major paradigm change if the role of the state was expanded to protect private companies from cyber attacks; however, it does not look like the private sector can fix this by itself. The debate on expanding the technical and organizational competencies of the state is certain to affect Swiss society. <h1><b>Practitioners and the Private Sector: Ransomware</b></h1> <p>In a podium discussion organized by the World Economic Forum on "What’s next for multistakeholder action against cybercrime," it was articulated that ransomware is <b>the</b> biggest topic in cyber security. Countless companies are victims of ransomware attacks, and the conference featured experience reports from incident responders. <b>Jacky Fox</b>, managing director of Accenture Security, told a war story of Irish hospitals losing their entire infrastructure: doctors having to resort to pen and paper, unable to operate diagnostic machines, unable to effectively treat patients; incident responders had to face that the impact of their decisions was a matter of life and death. 
<p><img src="https://p1.pxfuel.com/preview/313/933/304/cyber-security-crack-crime-access-anti.jpg" width="80%" alt="keyboard with lock - to stand for ransomware"/> <p><b>Serge Droz</b>, seasoned incident responder and director of <a href="http://first.org">FIRST</a> (Forum of Incident Response and Security Teams), pointed out that case data is important. Investigating a single case of ransomware is impossible, while data from 20-30 cases of ransomware provides a basis for investigation, as one can recognize patterns. He pointed out a difference in roles in multistakeholder action: incident responders have as their first priority the return to normal conditions, while prosecution's first priority is forensics and identifying the attacker. <p><b>Maya Bundt</b>, Cyber Practice Leader at Swiss Re and representing insurances, pointed out how the idea of data sharing is much debated but rarely gets concrete. There are many different kinds of "data" involved in ransomware attacks, from technical security-relevant data like indicators of compromise, to case-specific data (how did something happen?), to data specific to the victim company, and in the debate this is often all mixed up. For insurances, what matters is cost, and the ransom itself is often only a small fraction of the cost to a company, which needs to deal with data loss, getting back to business, and forensics. It was pointed out that the role of the insurance company is to make the risk transferable, while the risk associated with a cyber security attack is not completely transferable. Insurances do play an important role, since they can demand that companies implement practices and improve their posture, and they have an interest to do so, because if too many clients are attacked, the policies get so expensive that nobody will be able to afford them anymore. 
<h1><b>The White House: Chris Inglis on getting left of the attack</b></h1> Chris Inglis currently serves as the first <a href="https://www.whitehouse.gov/oncd/">US National Cyber Director</a> and advisor to the President of the United States Joe Biden on cybersecurity, and of course the US perspective was a highlight. His talk was a balance of rallying support and putting "cyberspace" into perspective with a conceptual framework. Cyberspace is a permanent reality that affects everyone's life. It affects whether electricity is available, whether public transportation works and whether your local store is able to operate. Consequently, the defense of cyberspace is not about the defense of technical "stuff", but about the critical functions that are served by the stuff. From this, he derives three factors that cyber defense is about: <ul> <li>the stuff: already since the 60s-70s, we know about notions and challenges of technical quality <li>the people: not only the ones who are adjacent to the stuff, but those who work for and benefit from the critical functions that affect everyone's lives <li>the doctrine, or "roles and responsibilities" that we assign in society. He mentions the SolarWinds supply-chain attack and how many actors were aware but thought someone else was responsible, and that adversaries are actively looking out for weak doctrine. 
</ul> <p>Defense is fragmented, because cyberspace is not perceived as a universal, ambient sphere of society but as disconnected patches, which means adversaries can deal with a target one at a time but are not countered with a coordinated response. 
<p>He advocates for resilience by design, arguing that there are too many variables for anything to be called "secure." Defend not the "stuff" but the critical functions, ensure that anomalies can be detected as early as possible and countered with maximal leverage, and move away from current practices that assume an ineffective and meaningless division of labor. He points out that cyberspace is a shared resource, and that this type of defense has been done successfully many times before: in the automobile industry, in the aviation industry, for food and drugs. When asked about the role of regulation, he answered that first comes a general understanding of what constitutes common practice, tailored to the sector of interest, and only then might there be regulation as a last resort, preferably coordinated and not in 50 overlapping variants. <h1>Wrapping up</h1> <p>There was a lot more to the conference, but I will end here. It is clear that society at large, including Switzerland, is definitely aware, and things will happen. I noticed people mentioned "supply chain" and also SBOM and the recent NIST standard, how to improve security posture in healthcare, which depends on industry as well as certifications (security patches can apparently break certification), and much more. I picked "polaroid" as a title because, just like the good old notions, practices and challenges around software quality that in Chris Inglis's words have accompanied the software industry since the 60s, polaroid (instant pictures) is something that comes, goes bankrupt, and comes back, since we cannot really do without it. <p>I share the optimism of speakers like Serge Droz who say that after many years of helplessness, thanks to raised awareness and more coordinated action, we as a society should be able to tackle the ransomware phenomenon. 
Beyond technical questions, establishing practices and security standards will require everyone, society at large, to take part in the effort.Burak Emirhttp://www.blogger.com/profile/01651698653168460072noreply@blogger.com0tag:blogger.com,1999:blog-28320851.post-51606589973214327322022-03-29T02:55:00.000-07:002022-03-29T02:55:52.830-07:00The ultimate cross language guide to modules and packages<table> <tr> <th>Intent \ Language</th> <th>go</th> <th>java</th> <th>rust</th> <th>npm</th> <th>racket</th> </tr> <tr> <td>source file</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>module</td> </tr> <tr> <td>logical grouping</td> <td>package</td> <td>package</td> <td>module</td> <td>module</td> <td>collection</td> </tr> <tr> <td>unit of release</td> <td>module</td> <td>module</td> <td>package <br/> (crate)</td> <td>package</td> <td>package</td> </tr></table> <p>racket is special in that a racket package (*) may happily mess with multiple collections.Burak Emirhttp://www.blogger.com/profile/01651698653168460072noreply@blogger.com0tag:blogger.com,1999:blog-28320851.post-22595988776281719262021-12-11T02:02:00.011-08:002022-01-24T00:13:53.059-08:00Worked example of automata runs as sheaves<p>This post is about "sheaf semantics," a way to model, understand, and talk about dynamic phenomena. The situation is that you have some representation of such phenomena - possibly partial - and you would like to assign meaning, and are looking for a mathematical/logical model for that purpose. Here, we have such a possible mathematical model. This is a semantic technique - we do not consider the representation you may have on your hands, the data, evidence or proof, only the model. <p>As we will see, sheaves can help describe observations of dynamic phenomena in a precise way — even if these observations are partial, ambiguous, or admit multiple interpretations. The definition of a sheaf is technical and presupposes familiarity with a bunch of abstract math. 
Many people who need to describe "observations of dynamic phenomena" as part of their day-job will not find it very accessible. <p>I am going to use automata theory as an example since I assume this is understood by a lot more people (informaticians, programmers) than sheaves. There are textbooks on regular expressions and automata, but I am quite sure there are no textbooks that would connect it to sheaves. I am not advocating that automata theory be taught this way, but I believe this might be a good way to introduce a method and vocabulary that one may find helpful to describe (fragments of) reality. For anyone who shares the motivation, but finds this example simplistic, Joseph A. Goguen's paper "Sheaf semantics for concurrent interacting objects" may be a good thing to read next. $\newcommand{\redq}{\color{red}{q}}$ $\newcommand{\redQ}{\color{red}{Q}}$ <h2>Warm-up: Deterministic Automata</h2> <p>This is (roughly) how I learnt about deterministic automata (state machines): <ul> <li>we have a set $\Sigma$ of input symbols $a, b, \ldots$ called the alphabet. $\Sigma^*$ is the set of sequences $w$ of such letters (or, if you prefer, the free monoid on generators $\Sigma$), with $\epsilon$ the empty word. <li>a set $\redQ$ of states (assumed finite), <li> a transition <i>function</i> $\delta: \redQ \times \Sigma \rightarrow \redQ$. </ul> <p>We <i>extend</i> the transition function $\delta: \redQ \times \Sigma \rightarrow \redQ$ to a transition function $\delta^*: \redQ \times \Sigma^* \rightarrow \redQ$ <i>as usual</i>, meaning: $$\begin{array}{ll} \delta^*(q, \epsilon) &= q \\ \delta^*(q, aw) &= \delta^*(\delta(q, a), w) \\ \end{array} $$ <p>This extended transition function tells us where we end up if we start in a state $\redq$ and feed it a word. 
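<p>To make the definitions above concrete, here is a minimal sketch in Python. The two-state "door" automaton and its alphabet are made up for illustration; $\delta^*$ is just a fold of $\delta$ over the word.

```python
# Hypothetical deterministic automaton: a door that can be opened and closed.
# delta is a total function on (state, symbol), encoded as a dict.
delta = {
    ("closed", "open"): "opened",
    ("closed", "close"): "closed",
    ("opened", "close"): "closed",
    ("opened", "open"): "opened",
}

def delta_star(q, word):
    """Extended transition function: fold delta over the word."""
    for a in word:
        q = delta[(q, a)]
    return q

print(delta_star("closed", ["open", "close", "open"]))  # -> opened
```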
<center> <a title="1st Macguy314, reworked by Perhelion / German translation by Babakus, Public domain, via Wikimedia Commons" href="https://commons.wikimedia.org/wiki/File:Finite_state_machine_example_with_comments.svg"><img width="256" alt="Finite state machine example with comments: two states for door open, door closed" src="https://upload.wikimedia.org/wikipedia/commons/thumb/c/cf/Finite_state_machine_example_with_comments.svg/256px-Finite_state_machine_example_with_comments.svg.png"></a> </center> <p>So far so good. Now, let's imagine that we do not only want the destination state, but also the sequence of states we passed through. <p>We can write a sequence of alternating state and input symbols $$[ \redq_{\color{red}{(0)}} ] \ \ a_{(0)} \redq_{\color{red}{(1)}} ~ a_{(1)} \redq_{\color{red}{(2)}} ~ \ldots ~ a_{(n-1)} \redq_{(n)}$$ such that $\redq_{(0)} = \redq_0$ and $\delta(\redq_{\color{red}{(i)}}, a_{(i)}) = \redq_{\color{red}{(i+1)}}$. Since the $\redq_0$ does not add much value, let's define a <i>run</i> as being this sequence, without $\redq_0$: $$(a_{(0)}, \redq_{\color{red}{(1)}}) ~ (a_{(1)}, \redq_{\color{red}{(2)}}) ~ \ldots ~ (a_{(n-1)}, \redq_{(n)})$$ <p>The <i>set of all possible runs</i> counts as a "behavior" of the automaton described by the transition function $\delta$ and starting state $\redq_0$. Since $\delta$ is a function, a given word $w$ will correspond to exactly one run, not more, not less. <p>Let's write down the set of all runs, for a non-empty word $w$: $$ \mathrm{runs}(w) = \{ v\ \in (\Sigma \times \redQ)^*\ \ |\ \mathit{IsRun}(v)\ \}$$ where $\mathit{IsRun}$ is a constraint $v_{(i)} = (a_{(i)}, \delta(\redq_{(i)}, a_{(i)}))$ for $0 \leq i \lt n$, where $\redq_{(i)}$ is either $\redq_0$ or obtained from $v_{(i-1)}$. So far, this is not new or different from the $\delta^*$ definition above. <h2> Presheaf </h2> <p>Let us take a different path to this definition of "set of runs". 
We call <i>prefix domain</i> any set $U$ of natural numbers that is <i>downward-closed</i> with respect to the normal ordering of natural numbers. Think of such sets as being assembled in a <i>space</i> $X$ like $\{\{\}, \{0\}, \{0, 1\}, \ldots, \{0, ..., n - 1\}, \ldots\} \cup \{\omega\}$ that is closed under pair-wise intersection and arbitrary union. The letter $\omega$ here stands for the set of all natural numbers, and by "arbitrary" we mean that even an infinite union of prefix domains would be a prefix domain. The set $ \{0, ..., n - 1\}$ is abbreviated as $[n]$, with $[0]$ being the empty set. <p>The conditions do not require that "all" possible prefixes are present in $X$. A space of prefix domains can be finite or have gaps, like $\{[0], [1], [2], [5]\}$. <p>A word of length $n$ can be described as a function $w: [n] \rightarrow \Sigma$. Since we want to talk about the run on a word, let's consider an "observation" over a prefix domain $U$ to be a function that maps each point of $U$ to a pair $(a, \redq)$ at that position. These functions are the reason why we called those sets "domains". 
In symbols: $$\mathcal{O}(U) = \{~ f: U \rightarrow \Sigma \times \redQ ~|~ \mathit{IsRun}(f) \ \} $$ where $\mathit{IsRun}$ are the same constraints as before: <ul> <li>if $0 \in U$, then $f(0) = (a_{(0)}, \delta(\redq_0, a_{(0)}))$ <li>if $u \in U$ and $u > 0$ with $f(u-1) = (a_{(u-1)}, \redq)$, then $f(u) = (a_{(u)}, \delta(\redq, a_{(u)}))$ </ul> <p>This really is saying the same thing as before, except that for any position $u$, we have a definition for $f(u)$ that is connected directly to $f(u-1)$, which we can do because $U$ is downwards-closed. <p>The functions $f$ are called <i>sections</i> of $\mathcal{O}$ over $U$. <h2>Restriction and Contravariance</h2> <p>For prefix domains $U, V$, if we have $U \subseteq V$, then $U$ is — you guessed it — a prefix domain for a prefix of $V$. Write $i: U \rightarrow V$ for the inclusion of $U$ into $V$. <p>To this inclusion corresponds an operation we can do on any section $f \in \mathcal{O}(V)$, namely the restriction $f\big|_U: U \rightarrow \Sigma \times \redQ$ which is the same as $f$ but only defined on domain $U$. <p>This correspondence can be written as $\mathcal{O}(i): \mathcal{O}(V) \rightarrow \mathcal{O}(U)$. Note the direction of the arrow. <p>With a few mild conditions regarding the "identity inclusion" and composition, we can consider $\mathcal{O}$ a contravariant functor. And any contravariant functor $\mathcal{O}: X^{op} \rightarrow \mathsf{Sets}$ for topological space $X$ is called a presheaf. <h2>Non-determinism and the Sheaf Condition</h2> <p>For a presheaf to be considered a sheaf it needs to satisfy the sheaf condition: (1) locality and (2) gluing. Before we get to these, we want to consider non-determinism. <p>A nondeterministic automaton has a transition <i>relation</i> $\delta \subseteq \redQ \times \Sigma \times \redQ$ instead of a function. 
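<p>Such a relation can be represented directly as a set of triples. A Python sketch (the three-state relation here is made up for illustration) that enumerates all runs of a nondeterministic automaton on a word:

```python
# Hypothetical nondeterministic transition relation: triples (q, a, q').
# From q0 the symbol "a" may lead to either q1 or q2.
delta = {("q0", "a", "q1"), ("q0", "a", "q2"),
         ("q1", "b", "q1"), ("q2", "b", "q0")}

def runs(q0, word, delta):
    """All runs on `word` from q0, each a tuple of (symbol, state) pairs."""
    result = [((), q0)]  # pairs of (run-so-far, current state)
    for a in word:
        result = [(run + ((a, q2),), q2)
                  for (run, q1) in result
                  for (p, s, q2) in delta
                  if p == q1 and s == a]
    return sorted(run for (run, _) in result)

# On the word "ab" there are two distinct runs: one via q1, one via q2.
```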
<p>For figuring out the set of runs, we could now keep track of the <i>sets</i> of states the automaton can be on. Let's not do that and use our presheaves instead: $$\mathcal{O}(U) = \{~ f: U \rightarrow \Sigma \times \redQ ~|~ \mathit{IsRun}(f) \ \} $$ where $\mathit{IsRun}$ is very similar to the constraint we had before, just adapted to the non-determinism that comes from $\delta$ being a relation: <ul> <li>if $0 \in U$, then $f(0) = (a_{(0)}, \redq)$ for some $\redq$ with $(\redq_0, a_{(0)}, \redq) \in \delta$ <li>for all $u \in U$, if $u > 0$ and $f(u-1) = (\_, \redq)$ and $f(u) = (a, \redq')$ then $(\redq, a, \redq') \in \delta$ </ul> <p>The determinism is gone, as intended: we describe the set of runs, but if someone showed us a <em>suffix</em> of a particular run we would not (in general) be able to tell what happened before. Depending on the $\delta$, we may be able to make some inferences. <h2>Sheaf Condition</h2> <p>We almost have enough material to understand the sheaf condition. A set of elements $(U_i)_{i \in I}$ from our space <i>covers</i> $U$ if $U \subseteq \bigcup_{i \in I} U_i$. <ul> <li>locality: if $(U_i)_{i \in I}$ cover $U$ and $s, t \in \mathcal{O}(U)$ with $s\big|_{U_i} = t\big|_{U_i}$ for all $i$, then $s = t$. <li>gluing: if $(U_i)_{i \in I}$ cover $U$ and we have $s_i \in \mathcal{O}(U_i)$ such that they agree on the overlaps $U_i \cap U_j$, then there is a section $s \in \mathcal{O}(U)$ with $s\big|_{U_i} = s_i$. </ul> <p>For our space of prefix domains and automata runs, we shall find that it satisfies the sheaf condition (what can you say about covers for a prefix domain?). <h2>Snippet Domains and More States</h2> <p>The prefix domains were chosen as a simple example, and the condition may get more interesting if we consider other spaces. Consider "snippet domains", sets of contiguous positions (intervals of natural numbers). 
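<p>The gluing step can be sketched concretely. A section is a function on its domain — here modeled as a Python dict from positions to (symbol, state) pairs, with made-up values; this sketch ignores the $\mathit{IsRun}$ constraint and only shows how fragments that agree on overlaps combine into one section:

```python
def glue(sections):
    """Glue sections (dicts: position -> (symbol, state)) into one.

    Returns the combined section, or None if two fragments disagree
    on an overlap (in which case no gluing exists)."""
    combined = {}
    for s in sections:
        for u, value in s.items():
            if u in combined and combined[u] != value:
                return None  # conflict on the overlap
            combined[u] = value
    return combined

# Two overlapping snippets of the same run (hypothetical values):
s1 = {0: ("a", "q1"), 1: ("b", "q2")}
s2 = {1: ("b", "q2"), 2: ("a", "q0")}
print(glue([s1, s2]))  # -> {0: ('a', 'q1'), 1: ('b', 'q2'), 2: ('a', 'q0')}
```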
Now the sheaf condition is more useful, as you can glue the snippets of a run together to obtain an actual run. <p>From automata (finite state machines), one can start generalizing to other structures. What if the state space is very large (parameterized in some way)? Or we leave certain things unspecified or asynchronous, like in I/O automata. <p>Modeling observations as sheaves means defining a suitable domain for fragments of observation, along with mappings that provide the observation data, in a way that permits the combination of the fragments into a "complete observation". These domains do not have to involve time, but tracking behavior over time is a domain where formal approaches quickly get unwieldy, and the intuitive sense we can make of the sheaf condition can prove helpful. There is a whole, and the sheaves describe its "parts". When we solve a jigsaw puzzle, we are taking advantage of a gluing condition of sorts. <p>Many of these advantages remain in effect when working only with presheaves. Consider transitions between states that are labeled not with symbols, but with expressions. This opens up the direction of modeling concurrent processes, like Joyal, Nielsen, Winskel describe in <a href="https://www.brics.dk/RS/94/7/BRICS-RS-94-7.pdf">Bisimulation from Open Maps</a>. <h2>Coda</h2> <p>In math, it seems to be well known and somewhat accepted that knowledge acquired by some same-generation group of researchers may get "lost" if it does not make it to textbooks (it appears in some monographs and is available to specialists, but from undergrad to grad school to full professor, people need to traverse a rocky path to get to an understanding which was a mere baseline in that same-generation group of researchers). <p>In informatics, we have means of codifying knowledge in the form of programs and type systems. 
It is then ironic that formalization itself leads to "loss" since the various syntactic and insignificant choices lead to fragmentation among researchers. This can be observed in the modeling of concurrent systems, where the scientific process has led to whole subfields where specialists in one area could not be bothered to learn the language of the others. <p>If we want to preserve and codify knowledge in ways that do not get fragmented in space or time, it may be good to make use of more abstract modeling languages, as provided by category theory, and leverage the succinctness that comes from aspects that are left informal (semantic techniques). Presheaves seem like a technique where formal and informal can be mixed in ways that can be useful in this sense. Burak Emirhttp://www.blogger.com/profile/01651698653168460072noreply@blogger.com0tag:blogger.com,1999:blog-28320851.post-78880108724901629842021-01-23T02:25:00.000-08:002021-01-23T02:25:04.243-08:00What are Commuting Conversions?The term "commuting conversions" in proof theory refers to a proof transformation of natural deduction proofs that involve sums (disjunction and existential formulae). The motivation for these comes from a desirable equivalence relation among natural deduction proofs that is lost when $\vee$ and $\exists$ are added. \(\newcommand{\orelim}[0]{\color{blue}{(\vee{\mathrm{\small ELIM}})}}\) Consider the following natural deduction rules $$ \cfrac{\begin{array}[cc]{} \\ {} \\ A \vee B \end{array} \quad \begin{array}[cc]{} [A] \\ ~~\vdots \\ \phantom{[}C\phantom{]} \end{array}\quad \begin{array}[cc]{} [B] \\ ~~\vdots \\ \phantom{[}C\phantom{]} \end{array} } {C}~\color{blue}{(\vee{\mathrm{\small ELIM}})} $$ $$ \cfrac{\begin{array}[cc]{} \\ {} \\ \exists x . 
P(x)\end{array} \quad \begin{array}[c]{} [P(t)] \\ ~~\vdots \\ \phantom{[}C\phantom{]} \end{array}} {C}~\color{blue}{(\exists{\mathrm{\small ELIM}})} $$ When the formula $C$ is not the end of the proof, but there is another formula $W$ that is derived from it, then we get <em>multiple</em> proofs that derive the same thing. These multiple proofs of the same thing are all <em>normal</em> in the sense that there are no "detours" in the form of implication introduction and elimination. If we want a normal form to be <em>unique</em>, then we must necessarily pick one among these possible proofs and designate it as "normal." This happens by adding additional conversions. $$ \cfrac{\begin{array}[cc]{} \\ {} \\ A \vee B \end{array} \quad \begin{array}[cc]{} [A] \\ ~~\vdots \\ \phantom{[}C\phantom{]} \end{array}\quad \begin{array}[cc]{} [B] \\ ~~\vdots \\ \phantom{[}C\phantom{]} \end{array} } {\cfrac{~C~}{~W~}} ~ \rightsquigarrow ~ \cfrac{\begin{array}[cc]{} \\ {} \\ A \vee B \end{array} \quad \begin{array}[cc]{} [A] \\ ~~\vdots \\ \phantom{[}C\phantom{]} \\ \hline \phantom{[}W\phantom{]} \end{array}\quad \begin{array}[cc]{} [B] \\ ~~\vdots \\ \phantom{[}C\phantom{]} \\ \hline \phantom{[}W\phantom{]} \end{array} } {W}$$ $$ \cfrac{\begin{array}[cc] {} & [P(t)] \\ {} & \vdots \\ \exists x . P(x) & C \\ \end{array}}{\cfrac{~C~}{~W~}} ~ \rightsquigarrow ~ \cfrac{\begin{array}[cc] {} & [P(t)] \\ {} & \vdots \\ {} & C \\ \exists x . P(x) & \overline{~W~} \\ \end{array}}{W} $$ <h2>Pattern matching example</h2> Programmers may appreciate the following presentation of the same thing in Scala-like syntax. 
This: <blockquote> $\color{blue}{W}$($r$ match { case Left(a) => $\color{blue}{s}$; case Right(b) => $\color{blue}{t}$ }) </blockquote> is the same as this: <blockquote> $r$ match { case Left(a) => $\color{blue}{W(s)}$; case Right(b) => $\color{blue}{W(t)}$ } </blockquote>Burak Emirhttp://www.blogger.com/profile/01651698653168460072noreply@blogger.com0tag:blogger.com,1999:blog-28320851.post-32366108710165056152020-12-26T02:59:00.009-08:002022-01-22T15:16:40.956-08:00Kripke semantics and Tableaux for Intuitionistic Logic<p>Simply-typed $\lambda$ calculus terms can be seen as encodings of <a href="https://blog.burakemir.ch/2020/05/intuitionistic-propositional-logic-and.html">natural deduction proofs of intuitionistic propositional logic</a> (IPC - the C is for calculus; IPL is also common). This is the Curry Howard correspondence. <p>So we can prove an IPC formula by finding a suitable term of simply-typed $\lambda$ calculus. Sounds great - but there are other ways to represent proofs, and I wanted to understand better how the structure of these natural deduction proofs relates to the structure of those other proofs. 
<!-- \(\newcommand{\qfalse}[0]{\color{blue} {\mathit{\#f}}}\) -->\(\newcommand{\qfalse}[0]{\color{blue} {\bot}}\) \(\newcommand{\qp}[0]{\color{blue} p}\) \(\newcommand{\qq}[0]{\color{blue} q}\) \(\newcommand{\qX}[0]{\color{blue} X}\) \(\newcommand{\qY}[0]{\color{blue} Y}\) \(\newcommand{\qZ}[0]{\color{blue} Z}\) \(\newcommand{\qand}[0]{\color{blue} \wedge}\) \(\newcommand{\qor}[0]{\color{blue} \vee}\) \(\newcommand{\qimplies}[0]{\rightarrow}\) \(\newcommand{\qneg}[1]{{\neg #1}}\) \(\newcommand{\andintro}[0]{\color{blue}{(\wedge{\mathrm{\small INTRO}})}}\) \(\newcommand{\andelim}[0]{\color{blue}{(\wedge{\mathrm{\small ELIM}})}}\) \(\newcommand{\andelimleft}[0]{\color{blue}{(\wedge{\mathrm{\small ELIM/1}})}}\) \(\newcommand{\andelimright}[0]{\color{blue}{(\wedge{\mathrm{\small ELIM/2}})}}\) \(\newcommand{\orintro}[0]{\color{blue}{(\vee{\mathrm{\small INTRO}})}}\) \(\newcommand{\orelim}[0]{\color{blue}{(\vee{\mathrm{\small ELIM}})}}\) \(\newcommand{\implintro}[0]{\color{blue}{(\rightarrow{\mathrm{\small INTRO}})}}\) \(\newcommand{\implelim}[0]{\color{blue}{(\rightarrow{\mathrm{\small ELIM}})}}\) \(\newcommand{\notelim}[0]{\color{blue}{(\neg{\mathrm{\small ELIM}})}}\) \(\newcommand{\botelim}[0]{\color{blue}{(\bot{\mathrm{\small ELIM}})}}\) <p>As an example, consider the formula $\qneg{\qneg{(\qp \qor \qneg{\qp})}}$. We can take $\qneg{\qX}$ as abbreviation for $\qX \qimplies \bot$. Is it "obvious" how to find a term of type $((\qp \qor \qneg{\qp}) \qimplies \bot) \qimplies \bot$? Here is a natural deduction proof (the names are from this list of <a href="https://blog.burakemir.ch/2020/05/intuitionistic-propositional-logic-and.html">intuitionistic natural deduction rules</a> from the post above; I can't be bothered with typesetting a tree). $$\begin{array}{lll} 1. & \qneg{(\qp \qor \qneg{\qp})} & \mathrm{(assumption)} \\ 2. & \qp & \mathrm{(assumption)} \\ 3. & \qp \qor \qneg{\qp} & \mathrm{(via}~\orintro/1~\mathrm{from~2.)}\\ 4. 
& \bot & \mathrm{(via}~\implelim~\mathrm{apply~1.~to~3.)} \\ 5. & \qneg{\qp} & \mathrm{(via}~\implintro~\mathrm{from~[2.]~to~4.)} \\ 6. & \qp \qor \qneg{\qp} & \mathrm{(via}~\orintro/2~\mathrm{from~5.)} \\ 7. & \bot & \mathrm{(via}~\implelim~\mathrm{apply~1.~to~6.)} \\ 8. & \qneg{\qneg{(\qp \qor \qneg{\qp})}}& \mathrm{(via}~\implintro~\mathrm{from~[1.]~to~7.)} \\ \end{array} $$ <p>Maybe a better question to ask is: Why does this proof look the way it looks? How, starting from the formula, can we arrive at such a proof, with steps that are so simple that we can leave them to a machine? <h1>Intuitionistic Tableaux</h1> <p>Kripke semantics and tableaux give us an alternative view of proving IPC formulae. The tableau calculus we discuss here is not very different from sequent calculus; it uses "block tableaux" where each node is a set of (signed) formulae. It will be a refutation calculus - despite proving intuitionistic logic! <p>The definitions here follow Melvin Fitting's book [1] and Smullyan's unified presentation [2]. <p>In contrast to block tableaux, "analytic tableaux" have one formula per node (we consider the set of all formulae on a branch instead of looking at a single node). These and connexion methods are out of scope today, just like modal logic or the algebraic view (denotational semantics). The only differences from the aforementioned texts are that instead of using negation as primitive, I will use $\qfalse$ as primitive, and that I make it explicit when a principal $T \qX$ formula has to be preserved in tableau extension. <p>[1] Melvin Fitting. Intuitionistic Logic, Model Theory and Forcing [<a href="https://www.researchgate.net/publication/243704264_Intuitionistic_Logic_Model_Theory_and_Forcing">researchgate pdf</a>]. <p>[2] Raymond M. Smullyan. A generalization of intuitionistic and modal logics. 
https://doi.org/10.1016/S0049-237X(08)71543-X <h3>Syntax for Intuitionistic and Minimal Propositional Logic</h3> <p>Once more, we define the well-formed formulae (wff): <ul><li>a propositional variable $\qp$ is a wff</li><li>$\qfalse$ is a wff</li><li>if $\qX$ and $\qY$ are wff, then $\qX \wedge \qY$ is a wff.</li> <li>if $\qX$ and $\qY$ are wff, then $\qX \vee \qY$ is a wff.</li> <li>if $\qX$ and $\qY$ are wff, then $\qX \qimplies \qY$ is a wff</li> </ul> <p>We use $\qq,\qp$ for propositional variables and $\qX, \qY, \qZ$ as metavariables for formulae. The symbol $\qfalse$ stands for "falsehood", "falsity" or "absurd" and is also written as $\bot$. We use $\qfalse$ and $\qimplies$ as primitive (built-in) symbols, and define negation $\qneg{\qX}$ as an abbreviation for $\qX \qimplies \qfalse$. This goes well with the intuitionist notion that "a proof of $\qneg{\qX}$ amounts to a construction that transforms a proof of $\qX$ to a proof of falsehood." <!-- <p>Programmers who have passing familiarity with the Curry-Howard-Lambek correspondence are going to be familiar with $\qfalse$ (a type without values, pretty much like <code>void</code>; an initial object in a bi-CCC); we will see how the handling of $\qfalse$ is the main difference between intuitionistic and minimal logic. --> <h3>Kripke semantics</h3> <p>A (Kripke) model for <i>intuitionistic</i> propositional logic is a triple $\langle \mathcal{G}, \mathcal{R}, \Vdash \rangle$ where $\mathcal{G}$ is a set of <i>states</i>, $\mathcal{R}$ is an accessibility relation between states that is reflexive and transitive, and $\Vdash$ is a relation between states and formulae that satisfies conditions $K_0 - K_4$ below. We use $\Gamma, \Delta$ for elements of $\mathcal{G}$, read $\Gamma \Vdash \qX$ as "$\Gamma$ forces $\qX$" and write $\Gamma^\star$ for "any $\Delta$ with $\Gamma \mathcal{R} \Delta$". 
<p>($K_0$) If $\Gamma \Vdash \qX$ then $\Gamma^\star \Vdash \qX$ <p>($K_1$) $\Gamma \Vdash \qX \qand \qY$ iff $\Gamma \Vdash \qX$ and $\Gamma \Vdash \qY$ <p>($K_2$) $\Gamma \Vdash \qX \qor \qY$ iff $\Gamma \Vdash \qX$ or $\Gamma \Vdash \qY$ <p>($K_3$) $\Gamma \Vdash \qX \qimplies \qY$ iff for every $\Gamma^\star$, either $\Gamma^\star \not\Vdash \qX$ or $\Gamma^\star \Vdash \qY$. <p>($K_4$) $\Gamma \not\Vdash \qfalse$. <p>A formula is <i>valid</i> in a model if $\Gamma \Vdash \qX$ for all $\Gamma \in \mathcal{G}$. A formula is <i>valid</i> if it is valid in all models. Note that with $K_0$ in mind, we could also have written conditions $K_1$ and $K_2$ differently: <p>($K_1'$) $\Gamma \Vdash \qX \qand \qY$ iff for every $\Gamma^\star$, $\Gamma^\star \Vdash \qX$ and $\Gamma^\star \Vdash \qY$ <p>($K_2'$) $\Gamma \Vdash \qX \qor \qY$ iff for every $\Gamma^\star$, $\Gamma^\star \Vdash \qX$ or $\Gamma^\star \Vdash \qY$ <p>In the above definition, it is understood that $\Vdash$ is "seeded" with a relation $\Vdash^{0}$ between states and propositional variables $\qp$. This $\Vdash^{0}$ is then extended to a relation between states and whole formulae. It is also common to define states directly as subsets of the "available" propositional variables (the ones forced in that state). 
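<p>The conditions $K_0 - K_4$ can be checked mechanically on a finite model. Here is a minimal sketch in Python; the encoding of formulae as nested tuples and the dictionary representation of the states, the seed relation and $\mathcal{R}$ are my own choices, purely for illustration:

```python
# A sketch (not from the post): the forcing relation over a finite
# Kripke model. States are named; `seed` gives the propositional
# variables forced in each state; R maps a state to the list of its
# successors and must be reflexive and transitive.

def forces(state, formula, R, seed):
    """Formulae are a variable name, 'false', or ('and'|'or'|'implies', X, Y)."""
    if formula == 'false':                 # K4: no state forces falsehood
        return False
    if isinstance(formula, str):           # a propositional variable
        return formula in seed[state]
    op, x, y = formula
    if op == 'and':                        # K1
        return forces(state, x, R, seed) and forces(state, y, R, seed)
    if op == 'or':                         # K2
        return forces(state, x, R, seed) or forces(state, y, R, seed)
    if op == 'implies':                    # K3: quantifies over all successors
        return all(not forces(s, x, R, seed) or forces(s, y, R, seed)
                   for s in R[state])
    raise ValueError(op)

# A two-state model: Gamma = {q}, Delta = {p, q}, with Gamma R Delta.
seed = {'Gamma': {'q'}, 'Delta': {'p', 'q'}}
R = {'Gamma': ['Gamma', 'Delta'], 'Delta': ['Delta']}

neg_p = ('implies', 'p', 'false')
print(forces('Gamma', ('implies', 'p', 'q'), R, seed))  # p -> q is forced
print(forces('Gamma', ('or', 'p', neg_p), R, seed))     # excluded middle is not
```

Note how only the clause for $\qimplies$ looks at successor states - this is where intuitionistic forcing departs from classical truth tables.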
Here is an example, with two states $\Gamma = \{\qq\}$ and $\Delta = \{\qp, \qq\}$: <div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-i2UIGY6VmmA/Xp2ZZ9mFVII/AAAAAAAAcyU/u0e5UkTkl2AaV2mDIo8rlimyMSqhFbkawCNcBGAsYHQ/s1600/kripke_1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://1.bp.blogspot.com/-i2UIGY6VmmA/Xp2ZZ9mFVII/AAAAAAAAcyU/u0e5UkTkl2AaV2mDIo8rlimyMSqhFbkawCNcBGAsYHQ/s320/kripke_1.png" width="320" height="98" data-original-width="1080" data-original-height="332" /></a></div> <p>The relation $\Vdash$ takes all reachable states into consideration, and once a state forces $\qq$ (that is, $\Gamma \Vdash \qq$), any subsequent $\Gamma^\star$ must force it, too. The <i>"truth in a state"</i> is rather more dynamic than the absolute "truth" we are used to from classical logic. One can give this a temporal epistemic interpretation and say states represent subjective <i>knowledge</i> of true facts, and that this knowledge "grows over time" (as we access subsequent states). <p>As an exercise: Convince yourself that $\Delta \Vdash \qp \qand \qq$, that $\Gamma \not\Vdash \qp$, and that the formula $\qp \qimplies \qq$ is valid in this model. Note further that $\qp \vee \qneg{\qp}$ is not valid. We have $\Gamma \not\Vdash \qp$, which is very different from saying $\Gamma \Vdash \qneg{\qp}$. <!--To see this, let us take a single propositional variable $p$ as our formula, and a little model $M$ based on states $\mathcal{G} = \{ \{\}, \{\qp\}\}$ with an accessibility relation that includes $\{\}\mathcal{R}\{\qp\}$ but not the other direction, and a "seed" relation $\Gamma \Vdash^0 p$ iff $p \in \Gamma$, properly extended to $\Vdash$. Now suppose $\{\} \Vdash \qp \qor \qneg{p}$:<ul> <li>Then either $\{\} \vdash \qp$ or $\{\} \vdash \qneg{p}$.<ul> <li>Try $\{\} \Vdash \qp$? However, $\{\} \not\Vdash^0 \qp$. <li>Try $\{\} \Vdash \qneg{p}$ instead? 
But then $\{\} \mathcal{R} \{\qp\}$ with $\{\qp\} \Vdash^0 \qp$ and certainly $\{\qp\} \not\Vdash \qfalse$. </ul></ul>--> <p>[When Kripke models are used for modal logics, people consider accessibility relations with different properties, leading to different modal logics; there is an explicit symbol $\square$ to talk about accessibility, and our reflexive and (in particular) transitive $\mathcal{R}$ corresponds to S4 modal logic, with the S4 axiom $\square \qp \rightarrow \square\square \qp$. We will just mention here that intuitionistic logic can be <i>interpreted</i> in S4 modal logic.] <!-- <p>The property for $\Gamma \Vdash \qX \qimplies \qY$ reminds us of application in simply-typed $\lambda$-calculus, which combines $\qX \qimplies \qY$ and $\qX$ into a $\qY$. Recall that in the correspondence, $\lambda$ calculus terms are an encoding of <i>proofs</i>; we haven't seen any proofs yet. -->\(\newcommand{\qT}[0]{T}\) \(\newcommand{\qF}[0]{F}\) \(\newcommand{\qS}[0]{\mathrm{S}}\) <h3>Kripke semantics for Signed Formulae</h3> Onwards to a proof system for IPC based on an intuitionistic tableau calculus (or just <i>intuitionistic tableaux</i>). To this end, we introduce <i>signed formulae</i>, using symbols $T$ and $F$ and writing $T\qX, F\qX$ for a signed version of formula $\qX$: <ul><li>$\Gamma \vdash T\qX$ means $\Gamma \Vdash \qX$ and </li><li>$\Gamma \vdash F\qX$ means $\Gamma \not\Vdash \qX$. </li></ul> <p>In classical logic, $T\qX$ would mean $\qX$ is true and $F\qX$ would mean $\qX$ is false (or $\qneg{\qX}$ is true). The way to understand $\Gamma \vdash F\qX$ non-classically is to read it as "it is <b>not known</b> that $\Gamma \Vdash \qX$." Note that $\Gamma$ is merely a state, not a context or set of formulae - we are using this assertion as a building block to compare proof situations. For example, the assertions $T\qX \qand \qY$ and $F\qX$ cannot both be true at the same time (or in the same state). 
<p>We follow Smullyan's uniform notation, and classify our formulae into different types (whether they are "conjunctive" or "disjunctive" in nature):<ul><li> type $\alpha$ is any signed formula of the shape $T\qX \qand \qY, F\qX \qor \qY, F\qX \qimplies \qY, F\qfalse$ </li><li> type $\beta$ is any signed formula of the shape $F\qX \qand \qY, T\qX \qor \qY, T\qX \qimplies \qY, T\qfalse$. </li></ul> <p>Since we added $\qfalse$ as primitive, we need to say something about its components, too. We treat $\qfalse$ like we treat an empty disjunction.</p> <p>For a signed formula of type $\alpha$ ($\beta$) we call certain signed subformulae its <i>components</i> $\alpha_1$ and $\alpha_2$ ($\beta_1$ and $\beta_2$), according to this table: $$ \begin{array}{c|c|c} \alpha & \alpha_1 & \alpha_2 \\ \hline T\qX \qand \qY & T\qX & T\qY \\ \hline F\qX \qor \qY & F\qX & F\qY \\ \hline F\qX \qimplies \qY & T\qX & F\qY \\ \hline F \qfalse & F \qfalse & F \qfalse \end{array} \quad \quad \begin{array}{c|c|c} \beta & \beta_1 & \beta_2 \\ \hline F\qX \qand \qY & F\qX & F\qY \\ \hline T\qX \qor \qY & T\qX & T\qY \\ \hline T\qX \qimplies \qY & F\qX & T\qY \\ \hline T \qfalse & T \qfalse & T \qfalse \end{array} $$ <p>Again, for $\qfalse$ we have subformulae that are not proper, but this is not going to be a concern.</p> <p>Let us write $\overline{T\qX}$ for the <i>conjugate</i> of $T\qX$, namely the corresponding $F\qX$, and vice versa. Note that the conjugate of a formula of type $\alpha$ is of type $\beta$ and vice versa. In classical logic, a set of signed formulae $\{T\qX_1,\ldots,T\qX_n, F\qY_1, \ldots, F\qY_m\}$ would correspond to a sequent calculus sequent $\qX_1, \ldots \qX_n \longrightarrow \qY_1, \ldots, \qY_m$. The sequent would be provable iff the set of signed formulae is unsatisfiable. The distinction into types $\alpha$ and $\beta$ explains whether a formula can be split into its components (conjunctive formulae $\alpha$) or whether the proof needs to branch into two possibilities (disjunctive formulae $\beta$). 
<p>For intuitionistic logic, we are going to call all formulae $T \qX$ <i>permanent</i> and all formulae $F \qX$ <i>co-permanent</i>. With all this setup, we can remix our definition of Kripke model and $\Vdash$ into the following conditions using <b>signed</b> formulae $\qX$: <p>$(U_0(a))$ Either $\Gamma \Vdash \qX$ or $\Gamma \Vdash \overline{\qX}$, but not both. <p>$(U_0(b))$ If $\Gamma \Vdash \qX$ and $\qX$ is <i>permanent</i>, then $\Gamma^\star \Vdash \qX$. <p>$(U_1)$ For any permanent $\alpha$, $\Gamma \Vdash \alpha$ iff for every $\Gamma^\star$, $\Gamma^\star \Vdash \alpha_1$ and $\Gamma^\star \Vdash \alpha_2$ <p>$(U_2)$ For any permanent $\beta$, $\Gamma \Vdash \beta$ iff for every $\Gamma^\star$, $\Gamma^\star \Vdash \beta_1$ or $\Gamma^\star \Vdash \beta_2$ <p>The behavior on co-permanent $\alpha$ and $\beta$ is determined by $U_1$ and $U_2$ too, by taking into consideration $(U_0(b))$. Observe: <p>$(U_1')$ For any co-permanent $\alpha$, $\Gamma \Vdash \alpha$ iff for some $\Gamma^\star$, $\Gamma^\star \Vdash \alpha_1$ and $\Gamma^\star \Vdash \alpha_2$ <p>$(U_2')$ For any co-permanent $\beta$, $\Gamma \Vdash \beta$ iff for some $\Gamma^\star$, $\Gamma^\star \Vdash \beta_1$ or $\Gamma^\star \Vdash \beta_2$ <p>As an exercise: Go back to the example above, and convince yourself that $\Delta \Vdash T\qp \qand \qq$ and $\Gamma \Vdash F\qp$. Check that the formula $T \qp \qimplies \qq$ is valid in every state, using the conditions $U_\star$ above. Now do the same for $T \qp \vee \qneg{p}$, remembering that it stands for $T \qp \vee (\qp \qimplies \qfalse)$. Now, to keep it interesting, what about $\Gamma \Vdash F \qp \vee \qneg{p}$? Is $F \qp \vee \qneg{p}$ valid? <p>A final definition is to call a formula <i>special</i> if it is co-permanent but has a permanent component, otherwise it is called <i>ordinary</i>. The only special formula we have is $F\qX \qimplies \qY$, of type $\alpha$. 
If we had negation as a built-in operator, that would be a second special $\alpha$. Check out Smullyan's paper for predicate logic and other generalizations that require special $\beta$. <!-- <h3>Proofs in Sequent Calculus</h3> We treat a set $\qS$ of signed formulae $\{T\qX_1,\ldots,T\qX_n, F\qY_1, \ldots, \qY_m\}$ as a sequent $\qX_1,\ldots,\qX_n \vdash \qY_1,\ldots,\qY_m$ and give rules of sequent calculus in uniform notation. In sequent calculus, the root is at the bottom and the proof tree grows upwards, with axioms (leafs) at the top. Note that while Gentzen in his famous paper defines an intuitionistic sequent calculus LJ by restricting the number of formulae on the right to at most one, we are instead defining a multiple-succedent intuitionistic sequent calculus. $\vert~S~\vert$ stands for the sequent of the set of signed formulae $S$. <p>Axioms: $\large \vert \qS, \qX, \overline{\qX}\vert \quad \vert S, \qT \qfalse ~\vert$ <p>Inference rules: $$ \begin{array}{ll} (A1) & \mathrm{for~ordinary} ~ \alpha: ~ \LARGE \frac{\vert \qS,~ \alpha_1,~ \alpha_2\vert }{\vert \qS,~ \alpha\vert } \\ (A2) & \mathrm{for~special} ~ \alpha: ~ \LARGE \frac{\vert \qS_P,~ \alpha_1,~ \alpha_2\vert }{\vert \qS,~ \alpha\vert } \\ (B1) & \mathrm{for~ordinary} ~ \beta: ~ \LARGE \frac{\vert \qS,~ \beta_1 \vert}{\vert \qS,~ \beta \vert } \quad \frac{\vert \qS,~ \beta_2\vert }{\vert \qS,~ \beta\vert } \\ \end{array} $$ <p>There is a subtlety here, in that we are using sets and thus blur the fact that a permanent formula below the line <b>must</b> to be repeated above the line (e.g. $S, \beta$, consider $\beta \in S$ if $\beta$ is permanent. For permanent formulae $\qT\qX$, this corresponds to contraction being available on the left in Gentzen's intuitionistic sequent calculus LJ. --> <h3>Tableau Proofs</h3> <p>A tableau proof is like a sequent calculus proof written upside-down: we draw it with the root at the top, growing towards the bottom. Instead of $S \cup \qX$, we write $S, \qX$. 
A vertical bar $\vert$ indicates that there are two children. $\qS_P$ means $\{ \qT\qX \vert \qT\qX \in \qS\}$, the permanent part of $\qS$; moreover, a principal formula $\alpha$ is repeated in the result if it is permanent (same for $\beta$). This is just to emphasize that $T \qX$ (possibly) needs to be preserved; in sequent calculus, this would surface as a structural operation. $$ \begin{array}{ll} (A1) & \mathrm{for~ordinary} ~ \alpha: ~ \LARGE \frac{\qS,~ \alpha }{\qS,~ \alpha_1,~ \alpha_2 } \\ (A2) & \mathrm{for~special} ~ \alpha: ~ \LARGE \frac{\qS,~ \alpha }{\qS_P,~ \alpha_1,~ \alpha_2 } \\ (B1) & \mathrm{for~ordinary} ~ \beta: ~ \LARGE \frac{\qS,~ \beta }{\qS,~ \beta_1 ~ \vert ~ \qS, ~\beta_2 } \\ \end{array} $$ <!-- $$\frac{\qS, \qT\qX \qand \qY}{\qS, \qT \qX, \qT \qY} \quad \frac{\qS, \qF\qX \qand \qY}{\qS, \qF \qX ~\vert~ \qS, \qF \qY} $$ $$\frac{\qS, \qT\qX \qor \qY}{\qS, \qT \qX ~\vert~ S, \qT \qY} \quad \frac{\qS, \qF\qX \qor \qY}{\qS, \qF \qX, \qF \qY} $$ $$\frac{\qS, \qT\qX \qimplies \qY}{\qS, \qF \qX ~\vert~ S, \qT \qY} \quad \frac{\qS, \qF\qX \qimplies \qY}{\qS_T, \qT \qX, \qF \qY} $$ $$\frac{\qS, \qF\qfalse}{?} $$ --> <p>Now, a <i>configuration</i> is a collection $[~\qS_1, \qS_2, \ldots, \qS_n~]$ of sets of signed formulae. A rule takes a configuration and replaces one $S_i$ with its result (or its two results). A <i>tableau</i> is a finite sequence of configurations $C_1, \ldots C_n$ in which each configuration except the first resulted from its predecessor by application of a rule. <p>A set of signed formulae is <i>closed</i> if it contains both $\qT \qX$ and $\qF \qX$ for some formula $\qX$ or if it contains $\qT \qfalse$. A configuration $[~\qS_1, \qS_2, \ldots, \qS_n~]$ is closed if <b>all</b> sets $\qS_i$ are closed. A tableau $C_1, \ldots C_n$ is closed if some $C_i$ is closed. <p>A tableau for a set $\qS$ of signed formulae is a tableau $C_1, \ldots, C_n$ with $C_1 = [~\qS~]$. A finite set of signed formulae $\qS$ is <i>inconsistent</i> if some tableau for $S$ is closed. 
Otherwise, $\qS$ is <i>consistent</i>. A formula $\qX$ is a <i>theorem</i>, written $\vdash_I \qX$, if $\{\qF \qX\}$ is inconsistent, and a closed tableau for $\{\qF\qX\}$ is called a proof of $\qX$. <p>What we have done here is to define provability of $\qX$ as inconsistency of $\qF\qX$. This refutation calculus may not be appealing to an intuitionist. <p>Let's look at a proof of $\qneg{\qneg{(\qX \vee \qneg{\qX})}}$ in our $\qfalse$-primitive logic. $$ \begin{array}{l} [~\{\qF \qneg{\qneg{(\qX \vee \qneg{\qX})}}\}~] \\ [~\{\qT \qneg{(\qX \vee \qneg{\qX})}\}~] \\ [~\{\qT \qneg{(\qX \vee \qneg{\qX})}, \qF \qX \vee \qneg{\qX}\}~] \\ [~\{\qT \qneg{(\qX \vee \qneg{\qX})}, \qF \qX, \qF \qneg{\qX}\}~] \\ [~\{\qT \qneg{(\qX \vee \qneg{\qX})}, \qT \qX\}~] \\ [~\{\qT \qneg{(\qX \vee \qneg{\qX})}, \qF \qX \vee \qneg{\qX}, \qT \qX\}~] \\ [~\{\qT \qneg{(\qX \vee \qneg{\qX})}, \qF \qX, \qF \qneg{\qX}, \qT \qX\}~] \\ \end{array} $$ The same proof, in a bit more verbose form: $$ \begin{array}{l} [~\{\qF (((\qX \vee (\qX \qimplies \qfalse)) \qimplies \qfalse) \qimplies \qfalse)\}~] \\ \alpha (F\qimplies) \\ [~\{\qT ((\qX \vee (\qX \qimplies \qfalse)) \qimplies \qfalse), \qF \qfalse\}~] \\ \beta (T\qimplies) \\ [~\{\qT ((\qX \vee (\qX \qimplies \qfalse)) \qimplies \qfalse), \qF (\qX \vee (\qX \qimplies \qfalse)), \qF \qfalse\}~, \{\ldots, \qT \qfalse\}] \\ \alpha (F\qor)\\ [~\{\qT ((\qX \vee (\qX \qimplies \qfalse)) \qimplies \qfalse), \qF \qX, \qF (\qX \qimplies \qfalse), \qF \qfalse\}~, \{\ldots, \qT \qfalse\}] \\ \alpha (F\qimplies)\\ [~\{\qT ((\qX \vee (\qX \qimplies \qfalse)) \qimplies \qfalse), \qT \qX, \qF \qfalse\}, \{\ldots, \qT \qfalse\}] \\ \beta (T\qimplies)\\ [~\{\qT ((\qX \vee (\qX \qimplies \qfalse)) \qimplies \qfalse), \qF (\qX \vee (\qX \qimplies \qfalse)), \qT \qX, \qF \qfalse \}, \\ ~ \{\ldots, \qT \qfalse\}, \{\ldots, \qT \qfalse\}] \\ \alpha (F\qor)\\ [~\{\qT ((\qX \vee (\qX \qimplies \qfalse)) \qimplies \qfalse), \qF \qX, \qF (\qX \qimplies \qfalse), \qT \qX, \qF \qfalse \}, 
\\ ~ \{\ldots, \qT \qfalse\}, \{\ldots, \qT \qfalse\}] \\ \end{array} $$ <h4>What is convenient about this?</h4> <p>In applying the rules of the tableau calculus, we are taking the formula we want to prove apart and dealing with its subformulae until we close a branch (configuration). However, it is not an entirely deterministic procedure, in the sense that we still need to pick which of the formulae we want to take apart, and if we pick the wrong one, we may have to backtrack. Automated reasoning! And it should be easy to convert the result into a natural deduction proof (similar to Prawitz's view that a sequent calculus proof can be seen as a description of how to construct a natural deduction proof). <p>Another thing that is great about the tableaux method is that if, after taking everything apart, you end up with open branches (that cannot be closed), you can read them as models that provide counter-examples. The application of a "special" rule indicates that we transitioned to another state. <h4>What is special about intuitionistic tableaux?</h4> <p>Let's consider that a signed formula $\qF \qX$ corresponds to the right-hand side of a sequent (the succedent). So unlike in Gentzen's LJ, it is quite fine to have multiple formulae mingle on the right-hand side and stay intuitionistic, as long as the right-$\rightarrow$ rule (i.e. $\qF \qX \qimplies \qY$) keeps only one. This has also been explained here: <p>[3] Valeria de Paiva, Luiz Carlos Pereira. A short note on intuitionistic propositional logic with multiple conclusions [<a href="https://www.researchgate.net/publication/228619994_A_short_note_on_intuitionistic_propositional_logic_with_multiple_conclusions">researchgate pdf</a>] <p>The assertion $\qF \qX \qimplies \qY$ says: "at present, we think we can prove $\qX$ without at the same time proving $\qY$". 
In considering such a possible proof $\qT \qX$, we need to do bookkeeping to ensure that anything we prove along the way is kept separate from other formulae $\qF \qZ$ that we labeled as not-yet-known. This is why in the special $\alpha$ rule we use $\qS_P$ instead of $\qS$. This looks like the essence of intuitionistic reasoning as far as proof rules are concerned. It also makes intuitionistic tableaux a bit more complicated than classical ones. <h4>Coda</h4> <p>I hope the above sheds some light on the connection between the tableau calculus, Kripke semantics and intuitionistic logic. Soundness and completeness can be found in Fitting's book, as well as generalizations to modal and predicate logic. Of course, for stronger logics, carrying over soundness and completeness does not mean we retain decidability; stronger logics require not only taking formulae apart, but also instantiating them, which increases non-determinism and the search space. (Slightly updated 2022-01-22, for clarity.) Burak Emirhttp://www.blogger.com/profile/01651698653168460072noreply@blogger.com0tag:blogger.com,1999:blog-28320851.post-27534520026951513202020-09-28T01:35:00.001-07:002020-09-28T04:53:18.794-07:00Intuitionistic Higher-Order Logic and Equality <h1>Intro</h1> <p> I have posted plenty of motivation for considering equality and higher-order logic elsewhere on this blog by now. Let me finally get to the intuitionistic version of higher-order logic, since I <a href="https://blog.burakemir.ch/2020/04/higher-order-logic-and-equality.html" >talked about the classical version previously</a> and then also talked at length about <a href="https://blog.burakemir.ch/2020/05/intuitionistic-propositional-logic-and.html" >"Intuitionistic Logic and Natural Deduction"</a > without really getting closer to the matter. 
Intuitionistic higher-order logic is what lets you talk about an elementary topos, and using equality to characterize the logical connectives gives an insightful perspective on those commutative diagrams for people who are familiar with $\lambda$ calculus. \( \newcommand{\qeq}[0]{\color{blue} {\mathrm{eq}}} \newcommand{\qSigma}[0]{\color{blue} \Sigma} \newcommand{\qPi}[0]{\color{blue} \Pi} \newcommand{\qor}[0]{\color{blue} {\mathrm{or}}} \newcommand{\tprop}[0]{\omicron} \newcommand{\qtrue}[0]{{\color{blue} {\mathrm{true}}}} \newcommand{\qand}[0]{\color{blue} {\mathrm{and}}} \newcommand{\qimplies}[0]{\color{blue} {\mathrm{implies}}} \newcommand{\qPi}[0]{\color{blue} \Pi} \newcommand{\qfalse}[0]{{\color{blue} {\mathrm{false}}}} \newcommand{\qnot}[0]{{\color{blue} {\mathrm{not}}}} \) </p> <p>I am going to refer to material in <a href="https://blog.burakemir.ch/2020/04/higher-order-logic-and-equality.html" >"Higher order logic and equality"</a> often. Like in that post, there is actually no original material here; I am just gathering definitions that are probably well-known to some grad students and experts (yet nobody calls them standard, since neither higher-order logic nor intuitionistic logic is known very well outside certain circles who deal with formal logic and proof assistants and the like). This is my feeble attempt at an exposition. <h1>The Logic of a Topos</h1> <p> We take a $\lambda$ calculus with surjective pairing and a type $\omicron$ which we take as a substitute for "truth values" (topos theory texts use $\Omega$ but we want easy comparison with the classical version linked above, and we spell out $\qPi \lambda x. t$ without setting up an abbreviation $\forall x. t$ and we also spell out $\qSigma \lambda x. t$ without setting up an abbreviation $\exists x. t$). 
We want to set this up in an intuitionistic manner, which means doing away with the idea that $\omicron$ stands for two truth values, and not using classical logical equivalences to define $\qor$ and $\qSigma$. The terms would then correspond to a particular cartesian-closed category, the <a href="https://ncatlab.org/nlab/show/free+topos">free topos</a>. </p> <p> We again want to define everything in terms of equality $\qeq_\alpha: \alpha \rightarrow \alpha \rightarrow \omicron$. What is this equality doing? In our language, it talks about equality of <i>terms</i>, and in categorical logic, terms correspond to <i>arrows</i> and thus equality should talk about equality of arrows. (There is some detail hidden here as our language will also let us talk about "global elements" $1 \stackrel{c}{\longrightarrow} C$ which are written as constant terms $c: C$). We take the following elements of our logical language: $$ \begin{align*} \qtrue:\ & \omicron\\ \qand:\ & \omicron \rightarrow \omicron \rightarrow \omicron\\ \qimplies:\ & \omicron \rightarrow \omicron \rightarrow \omicron\\ \qPi_\alpha:\ & (\alpha \rightarrow \omicron) \rightarrow \omicron\\ \end{align*} $$ </p> <p> ... and <i>define</i> them using the only primitive $\qeq$: $$ \begin{align*} \qtrue &:= \qeq_{\tprop \rightarrow \tprop \rightarrow \tprop} \qeq_\tprop \qeq_\tprop\\ \qand &:= \lambda x:\tprop. \lambda y:\tprop. \\ &\quad \quad \qeq_{\tprop \rightarrow \tprop \rightarrow \tprop} \\ &\quad \quad \quad (\lambda z: \tprop \rightarrow \tprop \rightarrow \tprop. z~\qtrue~\qtrue) \\ &\quad \quad \quad (\lambda z: \tprop \rightarrow \tprop \rightarrow \tprop. z~x~y) \\ \qimplies &:= \lambda x:\tprop. \lambda y:\tprop. \qeq~x~(\qand~x~y)\\ \qPi_\alpha &:= \qeq_{(\alpha \rightarrow \tprop) \rightarrow (\alpha \rightarrow \tprop) \rightarrow \tprop}~(\lambda x : \alpha. \qtrue) \end{align*} $$ </p> <p> These definitions are the same as for classical logic. 
An intro to topos theory will typically describe $\qand$ and $\qimplies$ using commutative diagrams, but here we are defining the language first. Moreover, the language has only one primitive $\qeq$, though that primitive is instantiated at different types. </p> <h1>False</h1> <p> Now we look at $\qfalse$, which is also defined the same as in the classical case: $$ \begin{align*} \qfalse &: \omicron\\ \qfalse &:= \qeq_{\tprop \rightarrow \tprop} (\lambda x:\tprop. \qtrue) (\lambda x:\tprop. x)\\ \end{align*} $$ </p> <p> In a topos theory text, one encounters $\forall v:\omicron. v$, which is shorthand for $\qPi\lambda v:\omicron. v$ and actually expands to the very same definition above. The definition that mentions $\qeq$ explicitly has the advantage that it explains what is going on: it morally says "if there is anything other than $\qtrue$ in $\omicron$, then $\lambda x:\omicron. \qtrue$ and $\mathsf{id}_\omicron$ are not the same." So we have a value $\qfalse$, but it need not be the only thing distinct from $\qtrue$ in $\omicron$. </p> <h1>Not, Or and Exists</h1> <p> On to intuitionistic negation $\qnot$. In intuitionistic logic, we would define negation like this: $$ \begin{align*} \qnot &: \omicron \rightarrow \omicron\\ \qnot &:= \lambda x: \tprop. \qimplies(x, \qfalse) \end{align*} $$ </p> <p> If we consult the classical version $\qnot := \qeq_{\tprop \rightarrow \tprop} \qfalse$, we again find that this is not that different. </p> <p> Finally, let's look at $\qor$ and $\qSigma$. We define these using $\qPi$: $$ \begin{align*} \qor &: \omicron \rightarrow \omicron \rightarrow \omicron\\ \qor &:= \lambda x: \omicron. \lambda y: \omicron. \qPi \lambda r:\omicron. \\ & \quad \quad \qimplies(\qand(\qimplies(x, r), \qimplies(y,r)), r) \\ \\ \qSigma_\alpha &: (\alpha \rightarrow \omicron) \rightarrow \omicron \\ \qSigma_\alpha &:= \lambda x: \alpha \rightarrow \omicron. \qPi \lambda r:\omicron. \\ &\quad \quad (\qPi \lambda z:\alpha. 
\qimplies(\qimplies(x z, r), r)) \end{align*} $$ </p> <p> We can get an intuitive understanding of these definitions: for $\qor$, if we have a way of obtaining an element $r$ (result) from $x$ and one of obtaining the same from $y$, then also from $(\qor~x~y)$. Similarly for $\qSigma$, if we have a way of obtaining any result $r$ from the "instantiated" predicate $x z$, then also from $\qSigma x$. </p> <p>The definition of $\qor$ in terms of $\qand$ and $\qPi$ is not very different from the natural deduction rule for "or elimination": $$ \cfrac{\begin{array}[cc]{} \\ {} \\ A \vee B \end{array} \quad \begin{array}[cc]{} [A] \\ ~~\vdots \\ \phantom{[}C\phantom{]} \end{array}\quad \begin{array}[cc]{} [B] \\ ~~\vdots \\ \phantom{[}C\phantom{]} \end{array} } {C}~\color{blue}{(\vee{\mathrm{\small ELIM}})} $$ <p>Checking the natural deduction rule for existential elimination and comparing it with the encoding above is left as an exercise.</p> <h1>The end of the beginning</h1> <p>Higher order logic seems to deserve more attention. We have set up a logical system based on $\lambda$ calculus without making use of the Curry Howard correspondence. While the previous discussion focused on classical higher order logic as introduced by Church in 1940, in this post we looked at intuitionistic higher order logic. The interest in the latter is at least very heavily influenced by topos theory. <p>It is slightly confusing that when discussing elementary toposes, one talks of first-order logic when in reality the logic is inherently higher-order (because a topos is cartesian closed). I guess the charitable perspective is that what people mean is "you can do everything that you would want to do in formal set theory using first-order logic." <p> The Lambek and Scott book "Introduction to higher order categorical logic" dwells a bit on the fact that it is always possible to get a formulation of the internal language of a topos using equality. 
Andrews' $Q_0$ develops simple type theory / higher order logic using a single equality operator $Q$. Various theorem proving systems have been based on this (TPS and HOL). <p>One could say that the way higher-order logic admits the definition of logical connectives in terms of others is an "encoding" of natural deduction rules. Could one use this idea to build a logical framework, a representation of derivations? Indeed, this is what underlies the <a href="https://www.lri.fr/~wolff/tutorials/2014-LRI-isabelle-tutorial/pure.pdf">Isabelle/Pure framework</a> and <a href="http://www.lix.polytechnique.fr/Labo/Dale.Miller/lProlog/">$\lambda$Prolog</a>. <p>It would seem worthwhile to write a proper book that takes a logical perspective on the $\lambda$ calculus, developing higher order logic as language, category theory as semantics and arriving at elementary toposes as models. It would probably feel very elementary and pedestrian in the academic world, but I do think it would be useful. It feels to me that gathering the relevant theory on logic and computation requires so much "background knowledge" that most practitioners will not have the energy (and this says something about the potential for adoption of formal methods). Until such a book arrives (I'm not sure if <i>I</i> would have the energy), I hope my out-of-order blog posts provide some help on the reader's journey. There are certainly a lot of connections between logic, language, types and semantics that are left to be explored, besides Curry Howard and dependent types. Burak Emirhttp://www.blogger.com/profile/01651698653168460072noreply@blogger.com0tag:blogger.com,1999:blog-28320851.post-25339945965845296172020-09-27T19:37:00.000-07:002020-09-27T19:37:36.386-07:00Programmers love Higher-Order Logic<p> My <a href="https://blog.burakemir.ch/2020/04/higher-order-logic-and-equality.html" >previous take on higher-order logic</a > was classical, channeling Church's simple theory of types and Andrews' $Q_0$.
In that last post, I was hinting at motivation. So I want to expand a bit now on how this stuff can be useful for building software at large. I am posting some ramblings here. </p> <p></p> <h1>Equality is Fundamental</h1> <p> There are two different "levels" at which a programmer deals with some kind of reasoning that involves equality: </p> <ul> <li> while writing code: one needs to be able to tell when two objects are the same. </li> <li> when designing or arguing that code is correct: reasoning about whether behavior such as output values conforms to a (however implicit) specification </li></ul> <p> In both cases, one needs to be able to identify whether objects (or behaviors or input-output pairs) are "the same" as expected ones. But the latter activity, reasoning about program correctness, happens in some form of logic. Much that has been written and is being written on formal logic is about mathematics and mathematical convention. This is true whether we talk about semantics of programming languages or program specification. </p> <p> But for programmers, there are benefits to knowing formal reasoning even if one is not interested in the foundations of math, axiomatizations of set theory etc. It is no secret that in mathematics as in programming, you run into situations where there are multiple different ways to represent some data, and one needs to choose or translate between those representations. While programmers resign themselves to writing "proto-to-proto" translations over and over (which are often not as trivial as they seem), in mathematics it is customary to leave out the awkward details, introduce "abuse of notation" and quotienting to keep arguments informal, but concise. </p> <blockquote> The heart and soul of much mathematics consists of the fact that the “same” object can be presented to us in different ways. [1] </blockquote> <p> [1] Barry Mazur. <a href="http://people.math.harvard.edu/~mazur/preprints/when_is_one.pdf" >When is one thing equal to some other thing?
</a></p> <h1>Equality in First Order Logic</h1> <p> We take a leap towards "formal systems", where objects are represented using (strings of) symbols, along with strict rules to form statements and derive new true statements from existing ones. </p> <p> A typical definition of a first-order logic language contains constant and function symbols, predicate symbols, and logical operators. </p> <p> There is a "design choice" here whether equality is considered as <i>built-in</i>, magically denoting exactly the equality in the underlying sets, or whether equality is a predicate with certain special properties. <a href="https://en.wikipedia.org/wiki/First-order_logic#Equality_and_its_axioms" >Wikipedia</a > talks about "first-order logic with equality" (when built-in) and first-order logic without equality (when it is not). <!-- <p>A not very obvious "foundational fact" is that when people point at axiom systems like ZFC (or NGB or Ackermann set theory), there is always hidden a commitment towards first-order logic <b>with</b> equality. It does not make sense otherwise. The mindset that is at least encouraged by such a view is that there is some universe of sets and mathematical activity reveals its properties.</p>--></p> <p> When equality does not come built-in, one needs to add axioms. The equality axioms are special, in the sense that they have to be preserved by all the function and relation symbols.
This looks like: <ul> <li>$\mathsf{eq}$ is reflexive, symmetric, transitive</li> <li>if $\mathsf{eq}(x_1, y_1), \ldots, \mathsf{eq}(x_n, y_n)$, then $\mathsf{eq}(f(x_1,\ldots,x_n),f(y_1,\ldots,y_n))$ for all function symbols $f$ of arity $n$ <li>if $\mathsf{eq}(x_1, y_1), \ldots, \mathsf{eq}(x_n, y_n)$, then $R(x_1,\ldots,x_n) \Leftrightarrow R(y_1,\ldots,y_n)$ for all relation symbols $R$ of arity $n$ </ul> <p>With a "defined" equality, it is quite possible to interpret logical statements involving $\mathsf{eq}$ in such a way that two distinct elements are regarded as equal according to $\mathsf{eq}$. <h1>Coding our own equality</h1> <p> When we are talking about practical software development, there is really no hope of defining platonic ideals; yet when it comes to techniques of reasoning, informaticians are in various ways confined to results and methods that come from formalizing mathematics. </p> <p> Take this example: We'd like our terms to represent the free monoid on $X$, so we'd like these terms to satisfy the following equations (or identities): $$s \cdot 1 = s$$ $$t = 1 \cdot t$$ $$(s \cdot t) \cdot u = s \cdot (t \cdot u)$$ <p> Consider definitions in some fictional language to express the "interface" of a monoid: $$ \begin{array}{ll} \mathsf{interface}~ M[X] \\ \quad \mathsf{unit}(): M[X]\\ \quad \mathsf{emb}(x: X): M[X]\\ \quad \mathsf{dot}(s: M[X], t: M[X]): M[X]\\ \quad \mathsf{eq}(s: M[X], t: M[X]): \mathsf{bool} \end{array}$$ </p> <p> Here, $\mathsf{unit}$ stands for $1$, $\mathsf{emb}(x)$ is for embedding $x$ and $\mathsf{dot}$ is the combine operation in $M[X]$.
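As a concrete aside (my own illustration, not part of the post's fictional language), the interface above can be implemented in Python with term-like dataclasses, defining $\mathsf{eq}$ by normalization: the free monoid on $X$ is isomorphic to lists over $X$, so comparing normal forms decides equality up to the monoid equations.

```python
from dataclasses import dataclass
from typing import Any

# Terms of the algebraic theory of monoids over some carrier X.
@dataclass(frozen=True)
class Unit:
    pass

@dataclass(frozen=True)
class Emb:
    x: Any

@dataclass(frozen=True)
class Dot:
    s: Any
    t: Any

def norm(m) -> list:
    """Normal form: the free monoid on X is isomorphic to lists over X."""
    if isinstance(m, Unit):
        return []
    if isinstance(m, Emb):
        return [m.x]
    return norm(m.s) + norm(m.t)

def eq_mon(s, t) -> bool:
    """Equality in the free monoid: compare normal forms."""
    return norm(s) == norm(t)

s = Dot(Dot(Emb("a"), Emb("b")), Emb("c"))
t = Dot(Emb("a"), Dot(Emb("b"), Emb("c")))
print(s == t)        # structural equality of trees: False
print(eq_mon(s, t))  # equality in the free monoid: True, by associativity
```

Comparing normal forms is one way to compute with terms taken up to the monoid equations, rather than as bare syntax trees.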
If our fictional language has algebraic data types, we can use those to build <i>terms</i> which represent the algebraic theory of monoids: $$ \begin{array}{ll} \mathsf{data}~ \mathsf{Mon}[X] \\ \quad \mathsf{Unit}(): \mathsf{Mon}[X]\\ \quad \mathsf{Emb}(x: X): \mathsf{Mon}[X]\\ \quad \mathsf{Dot}(s: \mathsf{Mon}[X], t: \mathsf{Mon}[X]): \mathsf{Mon}[X]\\ \quad \mathsf{eqMon}(s: \mathsf{Mon}[X], t: \mathsf{Mon}[X]): \mathsf{bool} \end{array}$$ </p> <p> It is now really obvious what these constructors stand for, but we note that while the equality $\mathsf{eqMon}$ returns true whenever two terms are equal when considered as elements of the free monoid, there is at least one more equality relation on terms that is different ("structural equality" or "syntactic equality", which will consider $\mathsf{Dot}(\mathsf{Dot}(s, t), u)$ as distinct from $\mathsf{Dot}(s, \mathsf{Dot}(t, u))$). </p> <p> Mathematically speaking, we want to consider the quotient set $\mathsf{Mon}[X] / \mathsf{eqMon}$, and work with equivalence classes. </p> <h1>Formulas = Functions to Truth Values</h1> <p> After all this talk about equality in algebraic theories, we want to turn to equality in the context of logic, specifically higher-order logic. </p> <p> In classical logic, a proposition (logical statement) stands for a truth value, and there are exactly two of them (the two-value type "bool"). Boole was the first to associate propositional connectives with algebraic operations. In this perspective, the truth values are part of the language, even though this was later abandoned in Tarski-style semantics. </p> <p> Programmers are quite used to having functions return $\mathsf{bool}$. In terms of mathematical logic, our custom-made equality "relation" $\mathsf{eqMon}$ above is in fact a function to the set of truth values. Could we not do away with relations and talk in terms of functions?
</p> <p> Yes, we can; using truth values and functions in a formal language of propositions is the essential summary of classical higher order logic (or Church's simple theory of types). There is a more compelling example: if you consider "set builder notation" $$ \{ x \in A \ |\ \phi(x) \} $$ then this describes a subset of $A$. We can choose to describe $\phi$ as defining a (unary) relation, but viewing it as a function to $\mathsf{bool}$ is no doubt more natural in a context like <code>SELECT x FROM A WHERE </code> $\phi$ where "relations" are given by their extensions (as tables) and computation in the form of expressions is available. </p> <p> The point of (classical) higher-order logic is that using the $\lambda$ calculus and truth values, we have everything we need to define the meaning of all logical connectives, and the types prevent the logical paradoxes of naive, untyped set theory. Since the truth values are part of the language, it is possible to define the meaning purely through definitions that involve equality. <p>More on this and classical higher order logic in that last post on <a href="https://blog.burakemir.ch/2020/04/higher-order-logic-and-equality.html" >Higher Order Logic and Equality</a > and the next one I'm going to write in a second, where I talk about the intuitionistic version. </p> <p> I will just end with a note that there is no distinction between "predicate", "relation" and "set" in higher-order logic, and this is a good thing. An expression of type $\iota \rightarrow \omicron$ (where $\iota$ are the individuals and $\omicron$ the truth values) defines a (sub)set of individuals, a predicate on the set of individuals, a unary relation, all at the same time. $$\mathsf{even} := \lambda x:\iota. \Sigma \lambda y:\iota. \mathsf{eq}(x, \mathsf{times}(2, y))$$ $$\mathsf{greater} := \lambda x:\iota.\lambda y:\iota. \Sigma \lambda z:\iota.
\mathsf{eq}(x, \mathsf{plus}(y,z))$$ $$\mathsf{lowerbound} := \lambda x:\iota.\lambda y:\iota \rightarrow \omicron. \Pi \lambda z:\iota. \mathsf{implies}(y(z), \mathsf{greater}(z, x))$$ <p> The confusing bit for programmers may be that this thing is called simple type theory when in fact there is only one "sort" of individuals. This makes more sense when considering that it is no problem to come up with multiple "sorts" of individuals, but that alone wouldn't give sets ($\iota \rightarrow \omicron$), sets of sets $(\iota \rightarrow \omicron) \rightarrow \omicron$, sets of sets of sets (and so on) of individuals. It's this distinction of hierarchies of sets/predicates/relations that earns Church's higher order logic the name "simple type theory": only terms with the right types are considered well-formed and meaningful.Burak Emirhttp://www.blogger.com/profile/01651698653168460072noreply@blogger.com0tag:blogger.com,1999:blog-28320851.post-89384527990166060172020-07-19T14:43:00.003-07:002020-07-19T15:19:16.756-07:00Modules, components and their limitsI read <a href="http://www.sheshbabu.com/posts/rust-module-system/">Clear explanation of Rust's module system</a>. The TL;DR: the module system in Rust is a way to (somewhat) break/hide the link between what things are defined and what source files the things are defined in. A Rust module provides control over visibility across source files and makes explicit how references reach across "modules". <p>The post starts with a request to ignore what "module" means in other languages, and this plunged me into thoughts on all the things that are called modules. It is somewhat sad but to be expected that a basic word like "module" (or "package") could never have a standard meaning, despite all these programming languages supposedly being built on top of all this PL theory. <p>The more academic vocabulary has "module" as a language concept that comes with a scope and visibility, a way to bundle types and named values.
It's somewhat independent of installation and deployment (the only touch point being <a href="https://courses.cs.washington.edu/courses/cse341/04wi/lectures/09-ml-modules.html">support for separate compilation</a>), but - to my knowledge - there is no talk of an "installed-at-a-version" module. So the Rust module concept is pretty consistent with that. Arranging a set of modules that fit together is the programmer's problem, not the researcher's. <p>Alas, there is this thing that happens at the level of Rust crates/npm packages, and it is really a different problem. Nevertheless, those things are also called modules in some languages (C++20 modules, Java modules, go modules). <p>In Java, source files in the same directory make a Java <i>package</i> and they come with a way to control visibility. The association between files and packages is rigid. More recently, <a href="http://tutorials.jenkov.com/java/modules.html">Java modules</a> were added to express dependencies between modules and visibility of packages - something like "packages of packages", but really more on the level of Rust crates. <p>I am reminded of "components" (if anybody remembers component-oriented programming) because deployment is part of this game - the notion of dependencies only makes sense in the context of some installation. But then components used to be about runtime services (in addition to code), object brokers and such, and here we would actually like to talk about less dynamic things, just code and functionality in the form of source and/or libraries. Still, components are things that get assembled and exist at versions, so I'd say it's an appropriate, if old-fashioned name. <p>Now, such a (static) component would abstractly describe what npm packages or Rust crates achieve, and also the form of Debian development packages (sigh) where you install a C library in binary form and its header files.
That gives us even more of a twist now, which is that <a href="https://docs.microsoft.com/en-us/cpp/cpp/modules-cpp?view=vs-2019">C++ 20 modules</a> are really modules in the sense of achieving grouping for visibility and scope (check), but at the same time that whole notion has a huge impact on C++ since it should actually make it easier to deploy C++ components. A relevant quote from the C++20 link above: <blockquote>After a module is compiled once, the results are stored in a binary file that describes all the exported types, functions and templates. That file can be processed much faster than a header file, and can be reused by the compiler every place where the module is imported in a project.</blockquote> <p>So while a C++20 module is a language concept, it is foreseeable that one day all self-respecting newly developed libraries will be deployed as (<i>installed at a version</i>) modules. Oh well. There was really no reason to hope that a standard meaning for "modules" and the concept "above" could be justified in any way, across languages. At least, there remains an insight that it really makes sense to deal with the two levels separately. <p>I sure wish we could all call the installed-at-a-version thing "component", and the language concept "module". Just two paragraphs on how that might sound:</p> <p>Every language needs a module system: a way to deal with grouping and interfaces (making visible - "exporting" - only the interface and keeping the rest internal). Including files was always bad and indicates the lack of a module system. It's when one needs to deal with independently developed, deployed and released units that there comes the problem of dependencies, evolution and versions, and these things are the components. Components are <i>related</i> to modules in that they need to contain at least one of them for talking about the things that are provided and brought in scope.
<p>If there is a dependency between versions of components and there are backwards incompatible (!) changes in the depended-upon component (or a <a href="https://www.theregister.com/2016/03/23/npm_left_pad_chaos/">left-pad situation</a>), then this is something that needs to be dealt with through changes to the depending component. That isn't something that module systems (or any other language design) can help much with. Burak Emirhttp://www.blogger.com/profile/01651698653168460072noreply@blogger.com0tag:blogger.com,1999:blog-28320851.post-55211794480192506202020-07-18T10:07:00.002-07:002020-07-18T10:07:52.931-07:00Relating partial evaluation, multi-stage programming and macros<p>Here are some quick facts on two metaprogramming techniques. $$ \begin{array}{l} \mathsf{letrec}\ \mathtt{pow}\ =\ \lambda n : \mathsf{Int}. \lambda x : \mathsf{Int}. \\ \quad \mathsf{if}\ (\mathsf{eqInt}\ n\ 0)\ 1\ (x\ * (\mathtt{pow}\ (n - 1)\ x)) \end{array} $$ <p>What can we do with an expression $\mathtt{pow}\ 3$? <h2>Partial evaluation</h2> <p>The "partial evaluation" answer is to inline the definition, and evaluate what can be evaluated, yielding a <i>residual program</i>. $$ \begin{array}{l} \lambda x : \mathsf{Int}. \mathsf{if}\ (\mathsf{eqInt}\ 3\ 0)\ 1\ (x * (\mathtt{pow}\ (3 - 1)\ x))\\ \lambda x : \mathsf{Int}. x * (\mathtt{pow}\ 2\ x)\\ \ldots\\ \lambda x : \mathsf{Int}. x * (x * (x * 1)) \end{array} $$ <h2>Staging</h2> <p>The multi-stage programming answer (or <i>staging</i>) is to rethink $\mathtt{pow}$ as a transformation of code. $$ \newcommand{\qquote}[1]{\color{blue}{[|}#1\color{blue}{|]}} \begin{array}{l} \mathsf{letrec}\ \mathtt{pow}\ =\ \lambda n : \mathsf{Int}. \lambda x : \mathsf{\color{blue}{Code[Int]}}. \\ \quad \mathsf{if}\ (\mathsf{eqInt}\ n\ 0)\ \color{blue}{[|}1\color{blue}{|]}\ \color{blue}{[|}($x * $(\mathtt{pow}\ (n - 1)\ x))\color{blue}{|]} \\ \mathsf{let}\ \mathtt{mkpow}\ =\ \lambda n : \mathsf{Int}. \qquote{\lambda y : \mathsf{Int}.
$(\mathtt{pow}\ n \qquote{y})}\\ \end{array} $$ <p>The notation was (probably) not designed to scare you: <ul> <li>$\qquote{\ldots}$ is quasi-quotation. What is between these brackets is <i>code</i> <li>$\$(\ldots)$ is used within a quasi-quoted expression to <i>splice</i> code. </ul> <p>Now $\mathtt{mkpow}\ 3$ is going to give us the code of the desired power function: $$ \begin{array}{l} \qquote{\lambda y : \mathsf{Int}. $(\mathtt{pow}\ 3\ \qquote{y})}\\ \qquote{\lambda y : \mathsf{Int}. $(\mathsf{if}\ (\mathsf{eqInt}\ 3\ 0)\ \qquote{1}\ \qquote{($\qquote{y} * $(\mathtt{pow}\ (3 - 1)\ \qquote{y}))})}\\ \qquote{\lambda y : \mathsf{Int}. $\qquote{($\qquote{y} * $(\mathtt{pow}\ 2\ \qquote{y}))}}\\ \qquote{\lambda y : \mathsf{Int}. ($\qquote{y} * $(\mathtt{pow}\ 2\ \qquote{y}))}\\ \qquote{\lambda y : \mathsf{Int}. (y * $(\mathtt{pow}\ 2\ \qquote{y}))}\\ \qquote{\ldots}\\ \qquote{\lambda y : \mathsf{Int}. y * (y * (y * 1))} \end{array} $$ <p>There can be nested levels of $\color{blue}{\mathsf{Code}}$ and this explains the term multi-stage programming. For objects of this type to be really useful, there should be a method $\mathsf{run}: \color{blue}{\mathsf{Code}[T]} \rightarrow T$.</p> <p><i>Macros</i> are very related to the above: a macro gives the programmer a way to give a definition that is <i>elaborated</i> at compile time. This is similar to what is going on in splicing. The special case is that it is the compiler that calls $\mathsf{run}$; in other words, the context in which a macro is elaborated is the "compile-time stage".
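The staged $\mathtt{pow}$ above can be simulated in an ordinary language by making $\color{blue}{\mathsf{Code}}$ an explicit syntax tree. A minimal sketch in Python (all names here are mine, for illustration): quoted code becomes `Expr` values, splicing becomes ordinary function application, and `run`-ning the residual program is done by a small interpreter.

```python
from dataclasses import dataclass
from typing import Any

# A tiny stand-in for Code[Int]: expression trees of the residual program.
@dataclass(frozen=True)
class Lit:
    v: int

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Mul:
    a: Any
    b: Any

def pow_code(n: int, x) -> Any:
    """Staged pow: n is static (known now), x is dynamic (an Expr)."""
    if n == 0:
        return Lit(1)
    return Mul(x, pow_code(n - 1, x))  # [| $x * $(pow (n-1) x) |]

def mkpow(n: int):
    """Build the residual power function, then 'run' it by interpretation."""
    body = pow_code(n, Var("y"))  # [| \y. $(pow n [|y|]) |]
    def run(y: int) -> int:
        def ev(e):
            if isinstance(e, Lit):
                return e.v
            if isinstance(e, Var):
                return y
            return ev(e.a) * ev(e.b)
        return ev(body)
    return run

cube = mkpow(3)
print(cube(2))  # 8, computed as 2 * (2 * (2 * 1))
```

Note that the recursion on $n$ happens entirely at code-generation time; only the multiplications remain in the residual tree. A real multi-stage language would compile `body` rather than interpret it.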
The design of <a href="https://dotty.epfl.ch/docs/reference/metaprogramming/toc.html">Scala's metaprogramming facilities</a> is consistent with all this, and I picked this up from <a href="http://biboudis.github.io/papers/pcp-gpce18.pdf">"A practical unification of multi-stage programming and macros"</a> by Stucki, Biboudis, Odersky.</p> <p>That's all; this is as shallow as all my posts and there would be more to say; "Modal analysis of staged computation" by Davies and Pfenning would be a good place to continue for the curious and/or impatient. I'd like to connect these views on staging and macros and modal logic, but that's for another day.Burak Emirhttp://www.blogger.com/profile/01651698653168460072noreply@blogger.com0tag:blogger.com,1999:blog-28320851.post-32584432221490413662020-07-04T01:53:00.003-07:002020-07-04T02:16:52.847-07:00Language, Logic and Modeling <blockquote> Language is the parrot of thought, and a hard to teach one, nothing more. („Sprache ist der Papagei des Gedankens, und ein schwer gelehriger, nichts weiter.“) ― Friedrich Hebbel </blockquote> <p> Here's a short note about a perspective on logic for informatics, and decidedly not mathematics. </p> <p> Formal logic has its roots in formalizing mathematics. Expressing the steps of reasoning in a formal system would assure us that there are no "gaps". I wrote a brief account on <a href="https://blog.burakemir.ch/2020/05/intuitionistic-propositional-logic-and.html#begin">the beginnings of formal logic</a> elsewhere. </p> <p> The 20th century brought with it the advent of machines for information processing. This has gone way beyond mathematics already. In informatics (a better word for "computer science") one deals with information processing, and its practical kin, software engineering, deals with systems of all kinds where information gets in contact with reality.
</p> <p> There is a view that holds that we can only understand reality through models (see <a href="https://en.wikipedia.org/wiki/Constructivist_epistemology">constructivist epistemology</a>). Now, whether one subscribes to this or not, it is obvious that informatics relies on models of reality; information itself is always part of some model of reality. If you have worked with customers or product managers, you know it is a shaky and always incomplete series of models of reality, always adapting to new circumstances. </p> <p> All I am saying is: formal logic can and should help us as a tool to formulate these models precisely. Like in mathematics, we do not want to have "gaps" in our reasoning. This is a somewhat ambitious direction, as software systems are not always built according to specification, and specifications are usually prose. And it is not only about modeling the domain. </p><p> "Functional programming" has gathered a large following in recent years, and I think this is less because "software should be mathematics" and more because programmers benefit from constructing their code according to models that leave no "gaps". The tool at hand, a (statically typed) functional programming language, forces one to think thoroughly about data and the operations that are going to be carried out. </p> <p> One does not need to resort to philosophy for challenging this. Large software systems are distributed over many machines, and this gives rise to challenges that require methods different from those of mathematics (and different from programs that run on a single machine). While <i>functional requirements</i> lead us to modeling domain concepts, designing a system that meets those is constrained by <i>non-functional requirements</i>. We need to consider performance and machine capacity, and evaluate which underlying technologies to build on, what guarantees they provide.
It is here that engineering can be meaningfully distinguished from programming, and work is influenced less by theorems, more by best practices and experience. </p> <p> Engineering and domain modeling are full of abstract models (the favorite tool being: diagrams of boxes with their meaning provided by handwaving). Formal logic and actual, manifest models where people could experiment are not encountered very often, yet expressing these models, reasoning with them and testing/verifying them would be critical since mistakes are costly. Moreover, a large number of people have to understand and act in accordance with these models. Often, the only place where this becomes obvious is the APIs between subsystems, a point where much modeling has already happened and cannot be changed. </p><p> I will go further and claim that Conway's Law is a consequence of each team forming their own "unit of modeling". It would be a lot better if we learned to communicate explicitly and precisely what our models are, to explain our needs to model things differently, and how we can translate between our different models, instead of insisting on some "right" way of seeing something. I place my hopes in logic as a language and tool to do just that. Mathematics is a discipline where the practice of modeling a mathematical "universe" has advanced very far, and it is no wonder that people seeking to use precise methods turn to it. It is a mistake to interpret this field of application of formal logic as some sort of mathematical monopoly. Language is a tool to communicate, a logic is a style of language. </p>Burak Emirhttp://www.blogger.com/profile/01651698653168460072noreply@blogger.com0tag:blogger.com,1999:blog-28320851.post-72773516488673801312020-07-02T01:19:00.004-07:002020-07-02T01:27:58.016-07:00I don't know how to Semantics<p>I missed out on my daily writing practice of bad, unfinished, unedited thoughts. So why not combine it with idle thoughts on logic.
The title alludes to the friendly LOGO <a href="https://el.media.mit.edu/logo-foundation/what_is_logo/logo_programming.html"> error message</a>, and this exchange on Twitter: <blockquote class="twitter-tweet"><p lang="en" dir="ltr">Semantics can never be given in terms of axioms. Theories are given in terms of axioms. Semantics are given in terms of interpretations and models.</p>— Andrej Bauer (@andrejbauer) <a href="https://twitter.com/andrejbauer/status/1278368000127700994?ref_src=twsrc%5Etfw">July 1, 2020</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script> <p>Andrej is not wrong, but axiomatic semantics is a thing in PL. So I'm collecting here the two different contexts for the word and idea of "semantics": one in mathematical logic, one in programming languages. I am generally trying to embrace a multitude of meanings in different contexts, but given the strong connections between PL and logic, it is worth pondering what is going on here. <h1>Linguistic meaning: the writing on the wall</h1> <p>The word "semantics" comes from σῆμα "sign", "marking" and refers to the study of meaning (somewhat associated with linguistics, the study of language). Now, meaning is a very broad thing to study, and how we create meaning and interpret signs proper (beyond language) leads us to <i>semiotics</i>. I promise I am not going to spiral into ever higher levels of abstraction and will get back to the more mundane divergences of academic disciplines; still I want to share a story to show the depth of the question "how we make meaning of something" and what meaning really is. <p>Here is an example of linguistic and semiotic meaning. I believe I had heard and understood in English the idiom "the writing is on the wall" long before I heard the <a href="https://en.wikipedia.org/wiki/Belshazzar%27s_feast">biblical story</a>. It is hard to tell, since I do not remember.
Maybe I looked up the German word "Menetekel" – which is the same but doesn't sound as cool and portentous – and then forgot about it. Maybe I learned and chose to forget the biblical reference. English is full of references to stories that become idioms and get detached from their origins (I also learned very recently that "drinking the Kool-Aid" originates from the Jonestown massacre), yet somehow they convey meaning. <p>Conveying meaning in written language can only happen through more written language; however, language is definitely more than what is written. I claim it's possible that even if you never heard this story from the Bible and have no access to it, you can nevertheless learn the full meaning of the saying "the writing is on the wall" when you witness it being used in some situation that discusses the terrible ending to come. Whether that is from the textual, narrative context, or rather from the faces and gestures of the participants, or from your understanding of the reality of the thing it is said in reference to, that's hard to tell. <blockquote>The kamus (dictionary) writes how many meanings each word has; then I wondered and tried to write down a kamus showing to how many words a meaning corresponds. (Bediuzzaman Said Nursi) </blockquote> <blockquote> Wisdom is a tree which sprouts in the heart and fruits on the tongue. (Ali ibn Abi Talib) </blockquote> <p>Now, back to our divergent use of "semantics" in mathematical logic and programming languages. <h1>Semantics in mathematical logic</h1> <p>The predominant view in mathematical logic today is Tarski's conception of semantics and truth, <a href="https://plato.stanford.edu/entries/logic-classical/#Sema">adapted to model theory</a>.
This is what is taught to undergrads, and I should like to deconstruct the assumptions that come with it: <ul> <li>Statements are either true or false, or otherwise put, there is an objective notion of truth that we try to capture with logic languages.</li> <li>The meaning of a logical language is given by describing the interpretation of the language in a model</li> <li>The way we communicate or describe a model and the interpretation are well understood.</li> </ul><p>All issues one may have are subdued by the last point. You cannot really question why models are given in "informal set theory", because the whole meaning exercise <i>requires</i> some common ground. If one ever wonders how to give meaning to those things that are supposed to provide the well-understood meaning, one finds that formal set theory is relatively disconnected from what one is trying to do (and turns to type theory). <p>I find it very striking that at the time Tarski formulated his original criterion for defining truth, the lingua franca was higher-order logic (which is equivalent to simple type theory), not set theory. I do feel something was lost there. In classical higher-order logic, we are concerned with mappings to truth values, the point being the mappings (not the "sets"). So rather than being an "informal set", a grouping of objects by some property, or a formal set described by an obscure string of formal set theory, a "higher-order logic" set is defined by its mapping. A "woman" is defined by a mapping <pre>woman: Individuals -> {true, false}</pre> <p>... and you can argue when it returns true. I find this closer to discussions on meaning in the linguistic sense. <h1>Semantics in programming languages</h1> <p>In programming languages, "semantics" refers to the "meaning" of a programming language (this is actually a surprising and confusing statement). Now, "meaning" here is something a lot narrower than linguistic meaning.
<ul> <li>Something does not have to be universally and objectively true to give "meaning"</li> <li>The conventions we can rely on when giving meaning are entirely different. </li> <li>The way we communicate or describe a model and the interpretation are well enough understood.</li> </ul> <p>"Meaning" can be something like a specification that clarifies what statements and expressions of the language do. In practice, this is all informal. In research, there are three "styles": <ul> <li>Denotational semantics. We choose to embrace the conventions of mathematics as our medium for conveying meaning as objective truth</li> <li>Operational semantics. The meaning is in the behavior, the execution of the programs. See what this interpreter does? That is the meaning. <li>Axiomatic semantics. The meaning of a program is the properties you can state about it. </ul> <p>This distinction is entirely standard and helpful; I learnt these as an undergrad from e.g. Benjamin Pierce's "Types and Programming Languages". It is also largely ignored in industry, where a natural-language specification relying on conventions is all you get for "meaning" (and stackoverflow may cover the gaps). <h1>Finale</h1> <p>Programming actually has its roots in formal logic. It is not a coincidence that the act of setting up formal languages and the burden of having to define meaning is shared by people who do mathematical logic (ok, I should say formal logic, to not exclude the philosophers) and programming languages, and that informaticians take an interest in proof assistants and machine-checked proof. <p>Observe that all styles leave open a path to full formalization, but for axiomatic semantics it is somewhat odd since we'd have to formalize logic in logic.
The boundaries are also not as clear-cut, since one can also describe an interpreter by some mathematical structure (labeled transition systems), and there are also formal methods, verification and the "theory of a program", which needs some semantics to build on (not necessarily "the" semantics). The taxonomy is still very useful, though, since it says something about the intention. <p>My personal hobby is to study stacking systems of logic and reasoning in a "compositional" way. Category theory is useful here, precisely because it permits one to consistently ignore what the foundation is, and to describe everything in terms of mappings (arrows), just like higher-order logic. I shall leave this topic for another post. Burak Emirhttp://www.blogger.com/profile/01651698653168460072noreply@blogger.com0tag:blogger.com,1999:blog-28320851.post-20166079838269374362020-06-29T13:07:00.001-07:002020-06-29T13:07:29.932-07:00Go Karma Schizophrenia<p> Today, I tried to busy myself with doing some programming exercise in go. Not the board game, the language. Here are some idle thoughts on programming languages. </p> <p> I had forgotten how good I can be at procrastination. I drank about a dozen cups of tea (sore throat), checked Twitter and mail... it was terrible. </p> <p> If you studied programming languages, there are many things that seem off about golang. I could even understand if the lack of generics had been justified as: "we value type inference and structural subtyping higher than having generics." But no, they are on the record as "we never needed them". Ok... Opinions on programming language design are a dime a dozen. </p> <p> I do not aspire to exercise my golang skills because it is a "great programming language", or because of the appeal to simplicity. Following a change of role at work, I simply expect to be dealing with a lot of go code in my job. That, and "modern C++". It's as simple as that. And sure enough, you can get used to anything.
</p> <p> When I was a grad student working on a certain programming language, I remember how convinced we were that we'd be changing the world. And indeed we have, but then a whole bunch of others have, too. Golang did not exist back then, and neither did "modern C++". I did have some experience in C++ programming, from peeking at programming courses for math students during uni (absolutely nothing related to actual C++ in my informatics curriculum, though there was a lecture on object-oriented programming) and my two internships. I also learned some C from a friend, one seminar and then many hours of toying around. This changed with my job, of course, but the point is that I perceived this dichotomy between programming as taught and researched in academia and programming as done in industry or applied research contexts "with an industrial flavor". As grad students working on a language, we were convinced we had the best language, and determined to make it bestest, but I had seen "the other side". </p> <p> I continued to dabble in C++ in those times, because I was sure I'd need it someday; it's the reality that cannot be ignored when studying programming languages. I described this as "professional schizophrenia": you split yourself into the part of you that accepts the "industry" programming languages, and the part that supports progress based on research, which mostly gets ignored. I was then quite surprised when C# 2.0 got generics and Java 1.5 quickly followed suit. It looks like pattern matching is making it into the mainstream, too. </p> <p> Another way to view this discrepancy is pragmatism: you use what's there; things are the way they are due to path dependence, economic pressure or some other reason that was of concern to somebody. Languages are tools, no need for judgment, just move towards the objective.
This is my modus operandi, but it feels good to know that a domain-specific language using elements of an advanced type system is sometimes the thing that moves you towards that objective. Still, I am surprised how much one can be stuck in "my programming language is my identity", surprised at myself in hindsight but also at all the fan discussions. </p> <p> Today, after procrastination, I think there might also be some karma. While I had not been too vocal, I'm sure that in a weak moment I must have said something like "golang is not a good language" aloud. I remember looking at source code that I considered to look like line noise (the inline & turned me off); surely I must have expressed my disaffection a few times. After a whole day of pain-avoidance behavior – the other name for procrastination – I realize now that maybe this is what has been haunting me. I have been paying for my remarks, dearly. Wake up, Burak, it's time to do your exercise and show how much of a pragmatist you really are. </p> <p> On the bright side, after all this suffering, I am hoping to be born again as a happy gopher tomorrow. </p>Burak Emirhttp://www.blogger.com/profile/01651698653168460072noreply@blogger.com1tag:blogger.com,1999:blog-28320851.post-56770196493795953402020-06-28T05:09:00.003-07:002020-06-28T05:17:24.237-07:00Nothing is real<div style="width: 80%"><blockquote>We are the children of concrete and steel<br />This is the place where the truth is concealed<br />This is the time when the lie is revealed<br /> Everything is possible, but nothing is real</blockquote><p style="text-align:right"><a href="https://www.youtube.com/watch?v=1HbF3EAt3ck">Living Colour - Type</a></p></div> <p>Ryuichi had to get out. The kids were running around the house and screaming, their juvenile energy levels blending with the boredom of a cloudy Sunday morning to form an explosive mixture. He checked the viral pollution levels and left unceremoniously.
It wasn't ok to leave Lingzhao alone with the kids, but still he had to get out or the noise and shouting would get to him and erase all possibility of thought. He needed space to think, and silence. <p>The streets had not yet filled with people; only a few families strolled, their shining beautiful Black faces gleaming in the African sunlight. The Chromatic Cure had worked its wonders, he thought. After the first "baby pandemic" of 2020 and the 2nd civil war for racial justice came the Dinosaur diseases, caused by multiple strains of airborne virus out of the remnants of Siberian permafrost. They affected people with light skin color disproportionately, which had eventually led to a treatment. The skin's pigments were modified to transform UV light into virus-defeating proteins, but it needed Black pigments to work, a lot of them, as Ryuichi vaguely remembered from his medical biochemistry class in kindergarten. "And that is why everybody is Black now", the teacher had said. The Unified Society logo on her uniform caught his eye; it was one of his earliest memories. <p>Weekend or not, those remaining pesky white supremacists were still trouble, Ryuichi thought. Their enclaves were on life support from the Unified Society. Large compounds, former prison complexes that had been sealed airtight, determined to hold out, fueled by conspiracy theories and delusion. Endless debates had ensued over whether the move to reactivate the old underground supply channels to keep them alive was a justified use of scarce resources. Compassion won out, and the prison enclaves became the only refuge for white people who preferred living with other fellow white male supremacists. <p>Ryuichi worked on the team that intercepted electromagnetic signals and gathered intelligence. Two individuals, bugmold and GDJ, had emerged as leaders: one based on ideology, while the other seemingly controlled the software that kept the modified prison complex working.
Very little was known about the fate of women in the prison complexes; there had been many who chose to give up freedom to retain their skin color. But violent fights had erased all but a few, who willingly accepted bugmold's tales of superior intelligence and the day of revenge, and GDJ's programming school of thought, in order to keep the automated manufacturing going and launch InfoSec and fake news attacks on the Unified Society. The brutal fights in the White Supremacy enclaves had become a pattern that repeated itself once a decade. They even had a name for this: Natural Selection. <p>Ryuichi found a bench and took a break from his walk. It would soon be time. <p>Compassion also was the argument that led to the Plan being accepted. The intelligence gathered by Ryuichi's group revealed more and more signs that the crooked ideology and its implementation through software protocols had made the prisons places of crimes against humanity and intolerable suffering, and human rights groups revolted. The cure had been perfected and encapsulated; no complicated week-long procedure was necessary anymore: the biochemists of the Unified Society had created a new airborne virus that would set off the process of transforming the skin color. Everybody has a right to keep their skin color, the chairman of the assembly said, as long as it is Black. Even the most ardent supporters of "tolerance" could not bear the tales of natural selection. <p>Ryuichi checked his watch: it was noon. The sealed gray metal boxes with today's supplies to the prisons, delivered synchronously on a global automated schedule, had just opened with a click. He smiled and thought, now that White is history, the team will be asking for a new project tomorrow. But it was the weekend, after all.
Time to get back and see what the kids are up to.Burak Emirhttp://www.blogger.com/profile/01651698653168460072noreply@blogger.com0tag:blogger.com,1999:blog-28320851.post-51723112765640278682020-06-27T07:18:00.002-07:002020-06-27T07:18:46.642-07:00Gotta not-do what you gotta not-do<p>Today's writing is about <a href="https://en.wikipedia.org/wiki/Wu_wei">wu wei</a> - "effortless action". <p>Wu wei comes from Taoism. I don't know very much about Taoism. I do not have a plan to become a thought leader on Taoism. My only personal connection to the Chinese philosophical tradition is that I learned Tae Kwon Do as a teenager and I got curious about the "way" in my teachers' description that the roots of these lie not in "martial arts" but in a "philosophy (or way) of movement". <p>I was intrigued by this idea that "attaining perfection (of movement) through practice" was not for becoming the perfect fighter, but for attaining wisdom. I went on to read the Tao Te Ching and the I Ching, and while I did not understand much, I became sensitive to these topics. [A friend at work, a big fan of martial arts, laughed at me for taking this philosophy stuff seriously - but then, "If the Tao would not be laughed at this way, it would not be the Tao!"] <p>The point is, you don't have to be a philosopher to appreciate that the idea of effortless action is one that is found in many forms and cultures. It's a pattern easy to observe, if you have the eye for it. <p>Take "<a href="https://en.wikipedia.org/wiki/Flow_(psychology)#Professions_and_work">flow</a>" or "being in the zone". There are three conditions for flow to occur: <ul><li>Goals are clear <li>Feedback is immediate <li>A balance exists between opportunity and capacity </ul> <p>This resonates with the programmer working on a software artifact, with the amateur musician playing a piece on the piano, or when doing sports (I am going to stop here). It also resonates indirectly: think of a manager who wants to create a productive environment for their team.
<p>The analytical, self-conscious and doubtful mind takes a back seat and you "become one" with the thing you are doing. There is a view of taoism where this happens not only at the individual level, but also at the level of society; this is what is described in the Tao Te Ching and I Ching (and discussed in the wikipedia article on wu wei). <p>It should be intuitive (if somewhat illogical in the realm of words) that "effortless action" requires putting in some effort. The programmer had to learn many things in order to reach the flow state, hacking away at this code. The musician had to practice countless hours in order to be able to play the piece. That effort is not felt in the moment of flow. You are building up capacity. <p>Building up capacity, if done right, is subject to compounding effects. You don't want to only learn, but also watch your rate of learning. Is there a better method? Am I still working towards the same goal? <p>Now, when we think of "goals" we are slightly at odds with "effortless action" and the Taoist ideal, since setting a goal can already be "forced." "Effortless action" is not forced, and also not goal-oriented. <p>Effortless action and taoism in general are not concerned with values or morality. Taoism features the theme of man's inability to force changes upon society. This is a bit revolting at first sight, since today's society is full of injustice. Black lives matter! <p>Things start making a bit more sense when one thinks of the attempts to shape a society or the world by ideology: this is bound to fail or become unsustainable (let's count capitalism as the ideology of $$$ here). You can have the best law; it is not by itself going to guarantee justice. Or, when you think of some software to help with some process, it is helpful to look at those things that do not need to be specified - because it is hopeless to specify them and you need to rely on "common sense" or some norm or culture.
Also related is: "There is no way to peace, peace is the way". <p>The effort on the society level could be framed as "building the capacity and opportunity for justice". So an abstract, nondescript goal, direction or objective like "justice" or "I want to be good at X" is ok and not a goal in the sense of forcing some direction; we call it a goal since we live in times where we have an extraordinary freedom in choosing what we want to do with our lives. Choices do not preclude wu wei. I'm inclined to think that wu wei is related to authenticity and focus: not getting distracted by futile matters, not doing something because you feel you have to, not doing something for purely "political" (tactical) reasons, but focusing on the essence. Another quote that comes to mind here: "Never mistake activity for progress." <p>There is a lot more to find out about this. I am not an expert on Taoism and I still do not intend to become one; but maybe there's a reminder here for looking at one's own patterns of doing and not doing.Burak Emirhttp://www.blogger.com/profile/01651698653168460072noreply@blogger.com0tag:blogger.com,1999:blog-28320851.post-60983330240856654062020-06-26T03:23:00.002-07:002020-06-26T04:57:10.483-07:00Life has a limit but knowledge has none<blockquote>“Your life has a limit, but knowledge has none. If you use what is limited to pursue what has no limit, you will be in danger.” Zhuangzi</blockquote> <p>If DRMacIver in <a href="https://notebook.drmaciver.com/posts/2020-06-25-13:28.html">Subsetting life</a> can draw an analogy between the limited nature of consciousness / the human condition and how C++ programmers effectively know and use only a fraction of the C++ standard — a little subset of the possibilities — then I can do the same with Zhuangzi's warning and ... something.
Thank you, David.</p> <p>I have written on intuitionist mathematics and logic <a href="https://blog.burakemir.ch/2020/05/intuitionistic-propositional-logic-and.html">before</a>, so today it's not going to be about effective computations being terminating ones, or potential infinity vs actual infinity. No, let's talk about models.</p> <p>I found this quote in a little notebook of mine, in an entry dated 20 years ago. I had an actual habit of writing notes, long-winded streams of consciousness, full of philosophy and reflection; a note on the fleeting nature of consciousness next to a bitter account of unrequited love; a seemingly never-ending search. I threw away most of those notebooks, but for some reason I kept this one and found it the other day while clearing out the basement.</p> <p>A model is always a simplified image of something. That something is not necessarily "reality"; sometimes the thing that is simplified is just another model, or construction. When you make a model of anything, you leave out some parts (attributes) and retain others (Herbert Stachowiak calls this <i>Verkürzungsmerkmal</i>, or attribute of abstraction). Constructing models is a key activity for the informatician, and I am glad that undergrads <a href="https://files.ifi.uzh.ch/rerg/amadeus/teaching/courses/inf_II_fs10/inf_II_kapitel_02.pdf">get exposed to Stachowiak's model theory</a> (slides in German). <p>Any programmer engages in the business of creating models. Every data structure is part of a model, and data structures serve different purposes. What is right in one context is not appropriate in another. <i>Premature optimization</i> is the adaptation of code (and frequently also data) for the purpose of optimizing the performance of some operation, often losing flexibility in the process, and making other operations slower or more expensive.
Premature means it's done before it's warranted; the implicit recommendation is that it is good to keep things general until you know what operations you need.</p> <p>Let's not forget about consciousness and life. As a human, you need to make models, too. You need to understand and interact with your fellow humans, understand the consequences of your actions. This is not easy. For goodness' sake, you need to understand yourself. What was it that made me throw those notebooks away? </p> <p>There is this theory that there is only one consciousness. Carl Gustav Jung writes about this in Typologie, as a key pitfall that makes some introverts insufferable (I felt seen), since they tend to confuse their <i>self</i> and that universal consciousness. "One love" stands for universal love and respect for all beings, no distinctions. Religion literally means connecting your self with the One. The vedic religion talks about consciousness. You don't need to reach for religion to accept a universality of the basic conditions and limitations in which consciousness and thinking occur, though. <p>But not only life is limited; what each individual knows, and is able to do with the knowledge they have, is limited, too. Every human is absolutely unique, not only in the various social or economic conditions they are born into, but also in their continued perspective. I like this opening phrase from Jean Piaget's <a href="https://www.marxists.org/reference/subject/philosophy/works/fr/piaget.htm">genetic epistemology</a>: <blockquote>GENETIC EPISTEMOLOGY attempts to explain knowledge, and in particular scientific knowledge, on the basis of its history, its sociogenesis, and especially the psychological origins of the notions and operations upon which it is based.</blockquote> <p>Piaget set out to study the development of children, and ended up with a theory of knowledge. Is it constructed models all the way down?
Piaget classifies knowledge into three kinds: physical, social and "relationships", which he calls logico-mathematical models. Why does he call the latter "logico-mathematical"? I think because mathematics, and to some extent logical descriptions of mathematics, are used here as synonyms for a kind of knowledge where we are completely free to choose what we are modeling and why; at the same time the models are supposed to be <i>consistent</i>. We want to be sure that "they work", that they are good for something. <p>Now, the human condition <i>isn't</i> only about the subject. Different people have different models, so it is not unusual that programmers end up spending 50% or more of their time resolving their disagreements about models. True progress is when people learn something from each other, bridge differences, teach each other and enable others to overcome their limitations. <p>I don't know how I ended up on taoism again, but these are the "ten thousand things". Consciousness is not limited, but each individual slice of it is. And there are challenges and limits to collaboration, many of them stemming from social or economic limitations or preconceptions. Every individual does feel the need to belong; this is where rationality itself reaches its limits. <p>I am just going to end here. A lot of disagreements would be easily bridged if we made more of an effort to understand each other: what models, what limitations, what perspectives make people say what they say, do what they do.
It is not consciousness that is limited; it is its manifestation in the individual — and sometimes just the willingness to take the perspective of the other.Burak Emirhttp://www.blogger.com/profile/01651698653168460072noreply@blogger.com0tag:blogger.com,1999:blog-28320851.post-77251080672581700342020-05-30T16:58:00.001-07:002020-07-01T08:10:22.088-07:00Intuitionistic Propositional Logic and Natural Deduction<p>Here is a primer on intuitionistic propositional logic and natural deduction, along with a brief account of their historical context. The bits of historical context may explain why formal logic is somewhat entangled with the discourse on "foundations of mathematics". I frankly wanted to research and type up my notes on something else, but then started "writing backwards" and thought I'd make these the introductory notes I wish I had had before learning about these things. <p>The topics here can be seen as useful background knowledge for anyone learning about sequent calculus, proof theory, Martin-Löf type theory (aka intuitionistic type theory aka constructive type theory), the Curry-Howard-Lambek correspondence with category theory and particularly cartesian closed categories, and categorical logic; I may write about these connections at another time, but this is supposed to be an accessible overview.
We proceed according to this plan: <ul><li><b><a href="#begin">The beginnings of formal logic</a></b> - a few words on Frege, Russell <li><b><a href="#axiomatic">Axiomatic proof systems</a></b> - a few words on Hilbert systems, with modus ponens as the only inference rule <li><b><a href="#intuit">What is intuitionistic logic?</a></b> - a few words on Brouwer, Heyting, Kolmogorov <li><b><a href="#gentzen">Gentzen's natural deduction and sequent calculus</a></b> - Gentzen creates the field of structural proof theory <li><b><a href="#intnj">Intuitionistic natural deduction NJ</a></b> - a presentation of the natural deduction calculus NJ for intuitionistic propositional logic <li><b><a href="#natseq">Natural deduction in sequent calculus style</a></b> - an alternative presentation that groups assumptions into "judgments" $\Gamma \vdash A$ </ul> <p>The focus on propositional logic is for simplicity, and to build a bridge to my <a href="https://blog.burakemir.ch/2020/04/higher-order-logic-and-equality.html">last post</a> where I briefly described the Curry-Howard correspondence. One can add symbols and rules for quantifiers to obtain first-order predicate logic, and the inclined reader should consult more detailed texts on these topics; but the proof terms would then no longer be simply-typed $\lambda$-calculus, and with higher-order logic there is also a different way to talk about sets of individual objects. <h3><a name="begin">The beginnings of formal logic</a></h3> <p>For ages, mathematics has involved theorems and proofs. Yet the beginning of <i>formal</i> logic is marked by Gottlob Frege's and Bertrand Russell's works. <p>Gottlob Frege described a graphical notation that captured sound reasoning in precise rules. It was the beginning of the logicist program of demonstrating that all mathematics could be carried out in formal logic. Bertrand Russell advanced this program considerably, also inventing type theory in the process.
<p>"Formal logic" or "symbolic logic" refers to logical statements and reasoning being expressed in a <i>formal language</i>, with <i>symbols</i> representing logical statements and <i>rules of symbol manipulation</i> describing how to obtain new statements from existing ones, preserving validity. Frege's and Russell's deductive systems of reasoning are today presented as Hilbert systems (more on those below). <p>Georg Cantor had proposed set theory as a foundational system earlier, but a formal account of set theory had to wait until Zermelo published his axioms. People (including Cantor himself) had found paradoxes in set theory, or in what they understood to be set theory. The search for the "right" foundations continued, while alternative systems of formal logic were proposed (for example, Schönfinkel's combinatory logic, which does not use any variables). The period that followed these beginnings saw what is today called the foundational crisis of mathematics. <h3><a name="axiomatic">Axiomatic proof systems</a></h3> <p>David Hilbert and his school systematized mathematical logic, the application of formal logic to formalizing mathematics, and "metamathematics". <blockquote>Around 1920, the “Hilbert style” axiomatic approach, as it is often called, was known to everyone and dominated the logical scene [1]</blockquote> <p><a href="https://en.wikipedia.org/wiki/Hilbert_system">Hilbert systems</a> are a uniform presentation of formal deductive reasoning and can be treated as synonymous with axiomatic proof systems. There are "logical axioms" that capture the process of sound reasoning, and there are "mathematical axioms" which capture the mathematical theory we want to formalize. The only inference rules are the <i>rule of substitution</i> and <i>modus ponens</i>. A proof is a sequence of formulas, with each formula being either an axiom or derived from previous formulas by application of the inference rules. If a formula $A$ is derivable this way, this is written $\vdash A$.
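<p>As a small worked example of such a sequence-of-formulas proof, here is the classic derivation of $\vdash A \rightarrow A$; it uses only the first two axiom schemes of the system given below, $A \rightarrow (B \rightarrow A)$ and $(A \rightarrow (B \rightarrow C)) \rightarrow ((A \rightarrow B) \rightarrow (A \rightarrow C))$, together with modus ponens: $$ \begin{array}{lll} 1. & A \rightarrow ((A \rightarrow A) \rightarrow A) & \text{axiom scheme 1} \\ 2. & (A \rightarrow ((A \rightarrow A) \rightarrow A)) \rightarrow ((A \rightarrow (A \rightarrow A)) \rightarrow (A \rightarrow A)) & \text{axiom scheme 2} \\ 3. & (A \rightarrow (A \rightarrow A)) \rightarrow (A \rightarrow A) & \text{modus ponens on 2 and 1} \\ 4. & A \rightarrow (A \rightarrow A) & \text{axiom scheme 1} \\ 5. & A \rightarrow A & \text{modus ponens on 3 and 4} \end{array} $$ <p>Even this trivial theorem takes five steps, with the structure of the argument buried in the instantiations; this gives a taste of why the structure of axiomatic proofs is usually not considered interesting.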
<p>Later, John von Neumann refined axiomatic proof systems using <i>axiom schemes</i>: axiom schemes contain metavariables, all instances of a scheme count as axioms, and <i>modus ponens</i> becomes the only rule. Here is such a Hilbert-style axiomatic proof system for propositional logic, due to Jan Łukasiewicz: $$ \begin{array}{l} 1.~ A \rightarrow (B \rightarrow A) \\ 2.~ (A \rightarrow (B \rightarrow C)) \rightarrow ((A \rightarrow B) \rightarrow (A \rightarrow C)) \\ 3.~ (\neg B \rightarrow \neg A) \rightarrow (A \rightarrow B) \\ \end{array} $$ <p>The rule of modus ponens lets us infer from $A \rightarrow B$ and $A$ the new formula $B$. Note that the other operators $\vee$ and $\wedge$ are considered abbreviations; this is referred to as Ockham's razor, since it uses a minimum of primitives: $A \vee B$ is turned into $\neg A \rightarrow B$, and $A \wedge B$ into $\neg (A \rightarrow \neg B)$. <p>It is amazing that this axiomatic proof system is <i>complete</i>: all tautologies can be proven from a few axiom schemes. Many texts on formal logic still use axiomatic proof systems, presumably because these permit one to talk about formal logic with a minimal amount of machinery for syntax and logical axioms; appealing to the <a href="https://en.wikipedia.org/wiki/Deduction_theorem">deduction theorem</a> and admissible rules hides much of the structure of the proof, which is not considered interesting to begin with; all that matters is the fact that something can be proven. </p> <p>[1] Stanford Encyclopedia of Philosophy <a href="https://plato.stanford.edu/entries/proof-theory-development/">The Development of Proof Theory</a></p> <p>[2] Stanford Encyclopedia of Philosophy <a href="https://plato.stanford.edu/entries/logic-firstorder-emergence/">The Emergence of First-Order Logic</a></p> <h3><a name="intuit">What is intuitionistic logic?</a></h3> <p>L.E.J.
Brouwer rejected the idea that logic would be prior to mathematics and proposed <i>intuitionism</i> as a philosophy of mathematics; he understood mathematics as mental constructions that come first, before any rendering in logic. </p> <p>We can summarize the intuitionist positions as follows: <ul><li>An intuitionist rejects the law of the excluded middle $A \vee \neg A$ when $A$ is a statement about infinite sets</li><li>An intuitionist rejects the idea of an "actual infinite" altogether and regards infinity as "potential infinity", approximable by repeating a process, but never completed.</li><li>An intuitionist aims to base all mathematics on effective constructions that can be verified based on intuition (such as the intuition of being able to carry out a countable number of steps)</li></ul> <p>These restrictions avoid the paradoxes of naive set theory, at the cost of rejecting large parts of established mathematical practice. <p>Brouwer's student Heyting [3] and also Kolmogorov [4] formulated axioms of intuitionistic logic as axiomatic proof systems. <p>Their efforts are what enabled further discussion (beyond philosophical disagreement) and defined intuitionistic logic [5]. Today, maybe due to the multiple available proof systems for formal logic, many sources define intuitionistic logic informally by reproducing the "Brouwer-Heyting-Kolmogorov (BHK) interpretation" [<a href="https://en.wikipedia.org/wiki/Brouwer%E2%80%93Heyting%E2%80%93Kolmogorov_interpretation">wikipedia</a>]. This is a description of what counts as an acceptable proof. We skip a discussion of these rules here, because we will be giving natural deduction rules below. The key part is that the structure of the proof matters now; the structure contains the constructions that the intuitionist cares about.
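<p>The BHK clause for implication says that a proof of $A \rightarrow B$ is a construction transforming any proof of $A$ into a proof of $B$. Read through Curry-Howard glasses, such constructions are just typed programs. As a sketch (in Python, with names invented for illustration; the type annotations are not enforced at runtime), here is a "proof" of $(A \rightarrow B) \rightarrow ((B \rightarrow C) \rightarrow (A \rightarrow C))$, the transitivity of implication:

```python
# BHK reading, sketched as code: a proof of an implication is a function.
# All names and type variables below are illustrative only.
from typing import Callable, TypeVar

A = TypeVar("A")
B = TypeVar("B")
C = TypeVar("C")

def compose_proof(f: Callable[[A], B]) -> Callable[[Callable[[B], C]], Callable[[A], C]]:
    """A 'proof' of (A -> B) -> ((B -> C) -> (A -> C)): transitivity of implication."""
    return lambda g: lambda a: g(f(a))

def const_proof(a: A) -> Callable[[B], A]:
    """A 'proof' of A -> (B -> A): given a proof of A, ignore B and return it."""
    return lambda _b: a

# The construction can be run: instantiate A, B, C with concrete types.
inc: Callable[[int], int] = lambda n: n + 1
show: Callable[[int], str] = str
print(compose_proof(inc)(show)(41))  # prints 42
```

<p>In a typed functional language the type checker would verify that these programs really inhabit the stated formulas; untyped Python can only suggest the idea.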
<p>Intuitionistic <i>logic</i> is only considered one form of constructive logic and constructive mathematics; there are other logics that are compatible with the BHK interpretation, and multiple possible interpretations of the words construction and function, and then I think most mathematicians do not bother with formal logic (while still caring about precise language and rigour). A different system of logic that is motivated by enabling constructive reasoning is Martin-Löf type theory (also called <i>intuitionistic type theory</i> or <i>constructive type theory</i>). <p>[3] Arend Heyting (1930). Die formalen Regeln der intuitionistischen Logik. 3 Teile, In: Sitzungsberichte der preußischen Akademie der Wissenschaften. phys.-math. Klasse, 1930, 42–56, 57–71, 158–169. </p> <p>[4] Thierry Coquand. <a href="https://www.researchgate.net/publication/227266974_Kolmogorov's_contribution_to_intuitionistic_logic">Kolmogorov's contribution to intuitionistic logic.</a> 10.1007/978-3-540-36351-4_2. <p>[5] Stanford Encyclopedia of Philosophy <a href="https://plato.stanford.edu/entries/intuitionistic-logic-development/">The Development of Intuitionistic Logic.</a></p> <h3><a name="gentzen">Gentzen's natural deduction and sequent calculus</a></h3> <p>Gerhard Gentzen was David Hilbert's student and sparked the entire field of structural proof theory with his seminal article and thesis that introduced both natural deduction and sequent calculus - in both intuitionistic and classical variants [6]. <p>Gentzen was not the first to invent natural deduction. Jan Łukasiewicz observed that the axiomatic systems of Frege, Russell and Hilbert do not reflect mathematical practice; mathematicians start their proofs from assumptions, applying "proof figures" to derive new results. His student Stanisław Jaśkowski came up with a calculus of natural deduction, where derivations from assumptions are drawn in boxes [7].
<p>Gentzen pointed out that proofs come with structure: a proof may contain a "detour", and such detours can always be removed, yielding a normal form. In order to demonstrate this, he reformulated natural deduction as the <i>sequent calculus</i> and proved normalization through his celebrated cut-elimination theorem. In the sequent calculus, lists of formulae are manipulated (the word <i>sequent</i> was chosen by Kleene to resemble Gentzen's "Sequenz"). Much later, Dag Prawitz proved the corresponding normalization result directly for natural deduction [8]. <p>Natural deduction captures formally and precisely how a mathematician proves things, while still admitting a "natural" understanding of what is going on - unlike axiomatic proof systems, where the structure is artificial and ultimately does not matter, as everything is obtained by modus ponens and axioms. <p>[6] Gentzen, Gerhard Karl Erich (1935). "Untersuchungen über das logische Schließen. I". Mathematische Zeitschrift. 39 (2): 176–210. doi:10.1007/BF01201353. (<a href="https://gdz.sub.uni-goettingen.de/id/PPN266833020_0039?tify={%22pages%22:[180],%22view%22:%22info%22}">link</a>)</p> <p>[7] Stanisław Jaśkowski (1934) <a href="https://www.logik.ch/daten/jaskowski.pdf">On the Rules of Suppositions in Formal Logic</a> Studia Logica 1, pp. 5–32 (reprinted in: Storrs McCall (ed.), Polish Logic 1920-1939, Oxford University Press, 1967 pp. 232–258) <p>[8] Dag Prawitz (1965).
"Natural Deduction: A Proof-Theoretical Study" (reissued 2006 Dover) <!-- The naming scheme was: </p><ul> <li>intuitionistic natural deduction NJ and classical natural deduction NK </li><li>intuitionistic sequent calculus LJ and classical sequent calculus LK </li></ul>-->\(\newcommand{\andintro}[0]{\color{blue}{(\wedge{\mathrm{\small INTRO}})}}\) \(\newcommand{\andelim}[0]{\color{blue}{(\wedge{\mathrm{\small ELIM}})}}\) \(\newcommand{\andelimleft}[0]{\color{blue}{(\wedge{\mathrm{\small ELIM/1}})}}\) \(\newcommand{\andelimright}[0]{\color{blue}{(\wedge{\mathrm{\small ELIM/2}})}}\) \(\newcommand{\orintro}[0]{\color{blue}{(\vee{\mathrm{\small INTRO}})}}\) \(\newcommand{\orelim}[0]{\color{blue}{(\vee{\mathrm{\small ELIM}})}}\) \(\newcommand{\implintro}[0]{\color{blue}{(\rightarrow{\mathrm{\small INTRO}})}}\) \(\newcommand{\implelim}[0]{\color{blue}{(\rightarrow{\mathrm{\small ELIM}})}}\) \(\newcommand{\notelim}[0]{\color{blue}{(\neg{\mathrm{\small ELIM}})}}\) \(\newcommand{\botelim}[0]{\color{blue}{(\bot{\mathrm{\small ELIM}})}}\) <h3><a name="intnj">Intuitionistic natural deduction NJ</a></h3> <p>We now look at the calculus Gentzen calls NJ, but for <i>propositional</i> logic. For notation, we use $\wedge$ instead of & for conjunction and $\rightarrow$ instead of $\supset$ for implication. Our formulae are defined as: </p><ul><li> $\bot$ is a formula ('falsehood') </li><li> a propositional variable $p$ is a formula, </li><li> if $A$ is a formula, so is $\neg A$ </li><li> if $A, B$ are formulae, so are $A \vee B$, $A \wedge B$ and $A \rightarrow B$. </li></ul><p></p> <p>We shall need to use inference rules and derivations. An <i>inference rule</i> has the form: $$ \frac{X_1 \quad \ldots \quad X_n}{Y} $$ </p> <p>with the meaning that $X_i$ are <i>premises</i> and $Y$ the <i>conclusion</i> of the rule. In natural deduction, each premise and each conclusion of an inference rule is simply a formula. 
A <i>derivation</i> is an arrangement of inference rules in tree form; the derived formula is at the bottom and inference rules connect each conclusion to premises above the bar. A derivation built up from instances of inference rules in this way gives a proof of the formula at the root. </p> <p>The rules are as follows (the notation will be explained below): $$ \cfrac{A\quad B}{A \wedge B}~\color{blue}{(\wedge\mathrm{\small INTRO})} \quad \cfrac{A \wedge B}{A}~\color{blue}{(\wedge\mathrm{\small ELIM/1})} \quad \cfrac{A \wedge B}{B}~\color{blue}{(\wedge\mathrm{\small ELIM/2})}$$ $$ \cfrac{A}{A \vee B}~\color{blue}{(\vee{\mathrm{\small INTRO/1}})} \quad \quad \frac{B}{A \vee B}~\color{blue}{(\vee{\mathrm{\small INTRO/2}})} $$ $$ \cfrac{\begin{array}[cc]{} \\ {} \\ A \vee B \end{array} \quad \begin{array}[cc]{} [A] \\ ~~\vdots \\ \phantom{[}C\phantom{]} \end{array}\quad \begin{array}[cc]{} [B] \\ ~~\vdots \\ \phantom{[}C\phantom{]} \end{array} } {C}~\color{blue}{(\vee{\mathrm{\small ELIM}})} $$ $$ \cfrac{\begin{array}[cc]{} [A] \\ ~~\vdots \\ \bot \end{array} }{\neg A}~\color{blue}{(\neg{\mathrm{\small INTRO}})} \quad \cfrac{A \quad \neg{A}}{\bot}~\color{blue}{(\neg{\mathrm{\small ELIM}})} $$ $$ \cfrac{ \begin{array}[cc]{} [A] \\ ~~\vdots \\ \phantom{[}B\phantom{]} \end{array}}{A \rightarrow B} ~\implintro % % -> ELIM % \quad \quad \frac{A \rightarrow B \quad A}{B}~\implelim $$ $$ \cfrac{\bot}{A}~\botelim $$ </p> <p>Each rule preserves validity, which ensures that derivations produce correct results (<i>soundness</i>). If one does not consider the proof-theoretical explanation of what an inference rule is to be sufficient, one needs to talk about models and interpretations. 
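<p>As a small illustration of how instances of these rules compose into a derivation, here is a proof of $B \wedge A$ from the (undischarged) assumption $A \wedge B$:</p> $$ \cfrac{\cfrac{A \wedge B}{B}~\andelimright \quad \cfrac{A \wedge B}{A}~\andelimleft}{B \wedge A}~\andintro $$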
<p> Moreover, the collection of rules is systematic in a different way: if we ignore the duplication of the $\color{blue}{(\wedge{\mathrm{\small ELIM}})},\color{blue}{(\vee{\mathrm{\small INTRO}})}$ rules, we can say: for every logical operator $\neg, \wedge, \vee, \rightarrow$ there is one <i>introduction rule</i> and one <i>elimination rule</i>. In a sense, the introduction rule <i>defines</i> the meaning of the operator, and the elimination rule is ultimately a consequence of this definition: it removes precisely what was introduced! These are the "detours" in the structure of proofs, which can be removed. <p>In order to avoid unnecessary complication, we adopt the convention that in rule $\botelim$, the conclusion $A$ is always different from $\bot$. If we omit the rule $\botelim$, we end up with <i>minimal logic</i>. Let us now take two rules and read what they are saying: </p><ul><li> $\andelimleft$ is the (left) <i>elimination rule</i> for $\wedge$, and says: if you have a proof of $A \wedge B$, then you also have a proof of $A$ </li><li> $\andintro$ is the <i>introduction rule</i> for $\wedge$, and says: if you have a proof of $A$ and a proof of $B$, you may combine them into a proof of $A \wedge B$. </li></ul> </p><p><b>Assumptions.</b> As mentioned, a derivation may <i>start</i> with any formula $A$ at the leaves. In the course of the proof, assumptions may be <i>discharged</i>, as seen in the inference rules $\color{blue}{(\rightarrow{\mathrm{\small INTRO}})}$, $\color{blue}{(\neg{\mathrm{\small INTRO}})}$ and $\color{blue}{(\vee{\mathrm{\small ELIM}})}$. The discharged assumption is then marked $[A]$ to indicate that the conclusion does not depend on it. The rule $\color{blue}{(\rightarrow{\mathrm{\small INTRO}})}$ says: if you can prove $B$ from assumption $A$, then we have proven that $B$ follows from $A$, i.e. $A \rightarrow B$. 
Note that it is possible that there are several other, undischarged assumptions still present in the derivation, and that the discharged assumption may have occurred multiple times in the subderivation, with all those occurrences being discharged at once. </p><p>Let's go through an example proof of $X \vee (Y \wedge Z) \rightarrow (X \vee Y) \wedge (X \vee Z)$. For this, we will need a derivation that ends in $\implintro$. $$ \cfrac{ \begin{array}[cc]{} [X \vee (Y \wedge Z)] \\ ~~\vdots \\ (X \vee Y) \wedge (X \vee Z) \end{array} }{ X \vee (Y \wedge Z) \rightarrow (X \vee Y) \wedge (X \vee Z) }{\implintro} $$ For the upper part, we need $\orelim$. $$ \cfrac{ \begin{array}[cc]{} {\phantom{X}} \\ {\phantom{\vdots}} \\ X \vee (Y \wedge Z) \end{array} \quad \begin{array}[cc]{} [X] \\ ~~\vdots \\ (X \vee Y) \wedge (X \vee Z) \end{array} \quad \begin{array}[cc]{} [Y \wedge Z] \\ ~~\vdots \\ (X \vee Y) \wedge (X \vee Z) \end{array} }{ (X \vee Y) \wedge (X \vee Z) }{\orelim} $$ The two branches are now easy: $$ \cfrac{\cfrac{X}{(X \vee Y)}\orintro\quad\cfrac{X}{(X \vee Z)}\orintro} {(X \vee Y) \wedge (X \vee Z)}{\andintro} $$ $$ \cfrac{\cfrac{\cfrac{Y \wedge Z}{Y}\andelimleft}{(X \vee Y)}\orintro\quad\cfrac{\cfrac{Y \wedge Z}{Z}\andelimright}{(X \vee Z)}\orintro} {(X \vee Y) \wedge (X \vee Z)}{\andintro} $$ </p><p><b>Intuitionistic treatment of negation.</b> What if we treated $\neg A$ as an abbreviation for $A \rightarrow \bot$? Then the role of $\color{blue}{(\neg{\mathrm{\small INTRO}})}$ is played by $\color{blue}{(\rightarrow{\mathrm{\small INTRO}})}$ and the role of $\color{blue}{(\neg{\mathrm{\small ELIM}})}$ is played by $\color{blue}{(\rightarrow{\mathrm{\small ELIM}})}$. In fact, every derivation in the language with primitive $\neg$ could be rewritten into a derivation for the language where $\neg$ is an abbreviation. Neat! </p><p><b>Normal form.</b> A natural deduction proof is said to be in normal form if there is no detour, i.e. 
if there is no formula that is both the conclusion of an introduction rule and at the same time a major premise of an elimination rule. This is due to what Prawitz calls the <i>inversion principle</i>: whenever an elimination rule operates on the result of an introduction rule above, the results of that elimination rule were already present among the premises of the introduction rule. A normal derivation will - roughly - take apart the assumptions at the leaves with elimination rules and then build up the conclusion with introduction rules. (The $\orelim$ rule complicates this a little, since it passes through a formula $C$ from its minor premises; the proof by induction on the structure of derivations will transform such proofs by rewriting them, pushing the application of $\orelim$ downwards.) <p><b>Curry-Howard correspondence.</b> As discussed <a href="https://blog.burakemir.ch/2020/04/higher-order-logic-and-equality.html">last time</a>, we can <i>assign</i> terms of simply-typed $\lambda$-calculus to these derivations. Propositions of IPC are then exactly the types of simply-typed $\lambda$-calculus, with $\wedge$ for product types and $\vee$ for sum (coproduct) types. Or, from the vantage point of logic, $\lambda$-terms are <i>proof terms</i>. Normalization of proofs corresponds to normalization of $\lambda$-terms; $\beta$-reduction corresponds to removal of a detour involving $\implintro$ and $\implelim$. </p><p>Still on Curry-Howard, it's a bit less obvious how to interpret $\botelim$ as a typing rule; the answer is that it needs an $\mathsf{abort}$ constant which (similar to throwing an exception) passes control to an imaginary top-level function that never terminates. 
This and more is explained succinctly in Zena Ariola's slides [8], but I do want to point out how this yields a neat programmer's intuition for the convention regarding $\botelim$: obviously, a program $\mathsf{abort}(\ldots\mathsf{abort}(t)\ldots)$ can always be replaced by a simple $\mathsf{abort}(t)$. </p> <p>[8] Zena Ariola. <a href="https://drive.google.com/file/d/1DQ9uqn_ev6h7Nee9dhiFu1qgNUdO5sxC/view">The interplay between logic and computation</a>. Slides from a talk at "Women in Logic 2019". </p> <h3><a name="natseq">Natural deduction in sequent calculus style</a></h3> <p>There is something odd about the above presentation: we have propositions at the leaves of a derivation which are <i>temporarily</i> treated as assumptions and then cease to be assumptions when they are discharged. It is also possible to not discharge them, or to discharge only some of them when making a proof step. Here, we present an alternative formulation that makes clear what happens with assumptions at every inference step. <p>(I thought for a long time that it was type theorists or programming language researchers who invented this, suspecting typographical reasons, but to my surprise I learned that it was in fact Gentzen [9] himself in 1936 who introduced this alternative, in the paper where he proved the consistency of Peano arithmetic using transfinite induction.) <p>We shall now present intuitionistic natural deduction "in sequent calculus style". Derivations are still tree-shaped; however, instead of formulae, premises and conclusions are <i>sequents</i> of the form $\Gamma \vdash A$ where $\Gamma$ is a (possibly empty) <i>set</i> of formulae. The meaning of a sequent is "from assumptions $\Gamma$, one can (intuitionistically) prove $A$". We write the union of such sets with a comma, omit braces around singleton sets and treat $\neg A$ as an abbreviation for $A \rightarrow \bot$ so we don't have to give any additional rules. 
Then, the rules look like this: $$ \cfrac{\Gamma \vdash A\quad \Delta \vdash B}{\Gamma, \Delta \vdash A \wedge B}~\color{blue}{(\wedge\mathrm{\small INTRO})} \quad \cfrac{\Gamma \vdash A \wedge B}{\Gamma \vdash A}~\color{blue}{(\wedge\mathrm{\small ELIM/1})} \quad \cfrac{\Gamma \vdash A \wedge B}{\Gamma \vdash B}~\color{blue}{(\wedge\mathrm{\small ELIM/2})}$$ $$ \cfrac{\Gamma \vdash A}{\Gamma \vdash A \vee B}~\color{blue}{(\vee{\mathrm{\small INTRO/1}})} \quad \quad \frac{\Gamma \vdash B}{\Gamma \vdash A \vee B}~\color{blue}{(\vee{\mathrm{\small INTRO/2}})} $$ $$ \cfrac{\Gamma \vdash A \vee B \quad\quad \Delta, A \vdash C \quad\quad \Theta, B \vdash C}{\Gamma, \Delta, \Theta \vdash C}~\color{blue}{(\vee{\mathrm{\small ELIM}})} $$ $$ \cfrac{\Gamma, A \vdash B}{\Gamma \vdash A \rightarrow B}~\implintro % % -> ELIM % \quad \quad \cfrac{\Gamma \vdash A \rightarrow B \quad \Delta \vdash A}{\Gamma, \Delta \vdash B}~\implelim $$ $$ \cfrac{\Gamma \vdash \bot}{\Gamma \vdash A}~\botelim $$ </p> <p>These rules make it clear that it is now "compulsory" to discharge assumptions. When a rule has multiple premises, the assumptions are combined. By choosing a broad enough set of assumptions ahead of time (weakening), the different sets of assumptions can be assumed to be the same. When moving towards the sequent calculus, one can do away with the treatment of assumptions as sets and treat them as ordered sequences, with <i>structural rules</i> like exchange and contraction; removing one or both structural rules then yields sub-structural logics and their proof theory. 
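<p>Under the Curry-Howard reading, the distributivity proof given earlier corresponds to a total program. Here is a minimal sketch in Python, where sum types are encoded as tagged pairs and product types as tuples; this encoding and the names <code>inl</code>, <code>inr</code>, <code>distribute</code> are my own choices, not anything from the sources above.</p>

```python
# Sum types as tagged pairs ("inl"/"inr", value), product types as tuples.

def inl(x):
    return ("inl", x)

def inr(y):
    return ("inr", y)

def distribute(e):
    """Proof term for X or (Y and Z) -> (X or Y) and (X or Z).

    The function itself is (-> INTRO), the case split on the tag is
    (v ELIM), and each branch rebuilds the conclusion with the
    introduction rules."""
    tag, v = e
    if tag == "inl":                 # assumption [X]
        return (inl(v), inl(v))      # (v INTRO/1) twice, then (^ INTRO)
    else:                            # assumption [Y ^ Z]
        y, z = v                     # (^ ELIM/1) and (^ ELIM/2)
        return (inr(y), inr(z))      # (v INTRO/2) twice, then (^ INTRO)
```

<p>For instance, <code>distribute(("inr", ("y", "z")))</code> yields <code>(("inr", "y"), ("inr", "z"))</code>: the eliminations and introductions of the derivation become projections and constructors.</p>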
<p>[9] Gerhard Gentzen (1936) "<a href="https://gdz.sub.uni-goettingen.de/id/PPN235181684_0112?tify={%22pages%22:[497],%22panX%22:0.469,%22panY%22:0.762,%22view%22:%22info%22,%22zoom%22:0.362}">Die Widerspruchsfreiheit der reinen Zahlentheorie</a>" <h3>Conclusion</h3> <p>We are at the end of our tour; I hope this can provide a good starting point for studying the multiple connections between $\lambda$-calculus, logic and proof theory. <p>Sound reasoning - preferably automated - and some form of formal logic and proof are going to be key when modeling a domain and constructing correct software for it. Programmers are used to dealing with precise formulations (programming languages, specs), but only a subset will be interested in the "formalization of all mathematics"; a programmer with a basic understanding of simply-typed $\lambda$-calculus should ideally have a very clear path to understanding logic, proof assistants and type theory - I don't think that this is the case today. <p>The activities of constructing models and software may be similar to mathematics, but they are certainly not the same. Each domain is a mathematical universe of its own, with random and often contradictory pieces of information waiting to be organized into "islands of consistency". Hopefully we will one day reach a widespread understanding and usable tool support for formal reasoning that can serve both practical informatics and the formalization of mathematics. <p>[If you want to discuss, the <a href="https://news.ycombinator.com/item?id=23365982">HN thread</a> is a great place to leave comments] Burak Emirhttp://www.blogger.com/profile/01651698653168460072noreply@blogger.com0tag:blogger.com,1999:blog-28320851.post-56269490594370324752020-04-02T16:32:00.001-07:002020-07-01T08:10:43.147-07:00Higher-order Logic and Equality.<p>In this post, I want to sketch a particular perspective on $\lambda$-calculus, higher-order logic and Church's simple theory of types. I have a few motivations to write this up. 
One of them is that as an undergrad (ages ago), upon encountering first-order predicate logic, I had endlessly wondered why one couldn't use "boolean functions" and something like functional programming for logic. It was only much later that I discovered that Church's 1940 typed $\lambda$ calculus was, in a sense, just that. <p>When people think of the connection between $\lambda$-calculus and logic, the first thing that comes to their mind may well be the Curry-Howard correspondence. Similarly, the first thing that people may think of when hearing "type theory" is Martin-Löf type theory. So another motivation is to have a place that describes higher-order logic a.k.a. simple type theory - which after all was the first appearance of typed $\lambda$ calculus - and how there are multiple ways to relate it to formal logic and proofs. <h3>Preliminary: Natural Deduction</h3> <p>Natural deduction [<a href="https://en.wikipedia.org/wiki/Natural_deduction">wikipedia</a>], in short, is a system of reasoning involving inference rules to derive new true statements from existing ones. What sets it apart from a Hilbert system [<a href="https://en.wikipedia.org/wiki/Hilbert_system">wikipedia</a>] is that the logical operators themselves are described using inference rules, and proofs are trees (derivations) formed from such rules. <p>In this journey, we present natural deduction rules in sequent calculus style: assumptions are tracked in a context $\Gamma$. <h3>Minimal Logic</h3> <!--In proof theory, it is the structures of the proofs that are the topic of interest. --> <p>Minimal logic [<a href="https://en.wikipedia.org/wiki/Minimal_logic">wikipedia</a>], also referred to as the positive fragment of intuitionistic logic, is a deductive system for propositional logic that does not contain any inference rule for falsehood. 
Here are the rules: $$ \frac{}{\Gamma, A \vdash A}\quad\quad \frac{\Gamma, A \vdash B}{\Gamma \vdash A \rightarrow B} $$ $$ \frac{\Gamma \vdash A \rightarrow B \quad \Gamma \vdash A}{\Gamma \vdash B}$$ We could have added conjunction $\wedge$ or disjunction $\vee$ or a symbol $\bot$ to stand for a false proposition, along with the convention that $\neg A = A \rightarrow \bot$; it would still be minimal logic. "Full" intuitionistic logic has a rule that goes from $\bot$ to an arbitrary proposition $A$ (<i>ex falso quodlibet</i>); other rules, like double negation elimination, get us classical logic. <p>The naming scheme that calls this "minimal" is consistent with Dag Prawitz's "Natural Deduction: A Proof-Theoretical Study." Yet, I believe it is fairly common to call this kind of logic intuitionistic logic, even though there is no rule for $\bot$. <p> The proofs for this logic have a structure that is familiar to all students of functional programming languages. <!-- % % % simply typed lambda calc % % --> <br> <h3>Simply typed $\lambda$-calculus</h3> <p>We now follow the celebrated "propositions-as-types" or Curry-Howard correspondence and assign names (variables) to assumptions and terms (expressions) to proofs, using this language: $$ s, t ::= x \enspace \vert \enspace (\lambda x: T. t) \enspace \vert \enspace(s\ t) \enspace $$ $$ T ::= b_i \enspace \vert \enspace T \rightarrow T $$ <p>Here the $b_i$ are some given base types. Our <b>annotated</b> inference rules: $$ \frac{}{\Gamma, x:A \vdash x:A}\quad\quad \frac{\Gamma, x:A \vdash s:B}{\Gamma \vdash (\lambda x:A. s) : A \rightarrow B} $$ $$ \frac{\Gamma \vdash s:A \rightarrow B \quad \Gamma \vdash t:A}{\Gamma \vdash (s\ t):B}$$ A term encodes the entire derivation (or natural deduction proof) of the proposition/type. 
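<p>The three annotated rules can be turned directly into a type checker. A sketch in Python, under a term representation of my own choosing (terms are tagged tuples, types are either a base-type name or <code>("->", T1, T2)</code>):</p>

```python
def typecheck(ctx, term):
    """Compute the type of `term` in context `ctx` (a dict mapping
    variable names to types), or raise if the term is ill-typed."""
    tag = term[0]
    if tag == "var":                       # axiom: Gamma, x:A |- x:A
        return ctx[term[1]]
    if tag == "lam":                       # -> INTRO
        _, x, ty, body = term
        return ("->", ty, typecheck({**ctx, x: ty}, body))
    if tag == "app":                       # -> ELIM
        _, s, t = term
        fty = typecheck(ctx, s)
        if fty[0] == "->" and fty[1] == typecheck(ctx, t):
            return fty[2]
        raise TypeError("ill-typed application")
    raise ValueError("unknown term: %r" % (tag,))

# The identity (lambda x:b. x) is its own derivation of b -> b:
assert typecheck({}, ("lam", "x", "b", ("var", "x"))) == ("->", "b", "b")
```

<p>The checker succeeds exactly when a derivation exists, which is the sense in which the term encodes the derivation.</p>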
<h3>Reduction as proof normalization</h3> <p>From the Curry-Howard angle, functional programming is "proof normalization": derivations that contain redundant steps get <b>simplified</b> to <i>smaller</i> derivations. The redundancy is always of the form that one part of the derivation needs an assumption $x:A$, and another part of the derivation actually "produces" a term of type $A$. In more detail, consider a derivation $\Gamma \vdash (\lambda x:A. s): A \rightarrow B$ and a derivation $\Gamma \vdash t: A$, which can be combined to yield $\Gamma \vdash ((\lambda x:A. s)\ t) : B$. The detour can be removed by taking every occurrence of $x$ within $s$ and replacing it with $t$. This yields a <i>smaller</i> derivation $\Gamma \vdash s[t/x] : B$. If two derivations stand in this relation, we call them $\beta$-equivalent. $$ ((\lambda x:A. s)\ t) \approx_\beta s[t/x] $$ <p>Here $[t/x]$ is notation for substituting $t$ for $x$, and we hurry to point out that care must be taken: we may need to rename bound variables. Two terms that only differ in the names of their bound variables are $\alpha$-equivalent, and we will pretend they are <i>equal</i>. If we used <a href="https://en.wikipedia.org/wiki/De_Bruijn_index">de Bruijn indices</a> instead of variable names, there would be no need for $\alpha$-equivalence. <p>[Digression: the identity above is <b>not</b> an algebraic identity - these are syntactic objects which involve variable binding. Sometimes it is ok to treat lambda-calculi as an algebra, and this leads to combinatory logic, see <a href="https://cstheory.stackexchange.com/questions/9993/what-is-the-point-of-calling-lambda-calculus-an-algebra">neelk's answer on stackexchange</a> and Selinger's <a href="https://www.mscs.dal.ca/~selinger/papers/combinatory.pdf">paper</a>.] <p>If we give a direction to the identity, we obtain $\beta$ reduction: $$((\lambda x:A. 
s)\ t) \Rightarrow_\beta s[t/x]$$ We can show that $\Rightarrow_\beta$ is <b>strongly normalizing</b> and therefore a terminating reduction relation, and that $\Rightarrow_\beta$ is confluent (Church-Rosser property), so all reductions end up in the same normal form. This yields a decision procedure for the "word problem": to find out whether $s \approx_\beta t$, reduce both to normal forms and check that they are <i>equal</i>. <!--Most of the time, the relation is written $=_\beta$ and contained in equivalence relation which is called "definitional equality".--> <!-- % % % higher order logic % % --> <p>We now shift our perspective, leave the Curry-Howard correspondence aside and look at the theory of <i>equality</i> between these expressions. \(\newcommand{\qeq}[0]{\color{blue} {\mathrm{eq}}}\) <h3>Higher-order Logic, using Equality alone</h3><p>We are going to build a calculus for classical logic, based on our language. Our development here follows the $Q_0$ type theory (due to Peter Andrews). This is a type theory which is close to Church's 1940 simply-typed $\lambda$ calculus, but uses only equality and a description operator. We will omit the description operator. <p>We need two base types: $\iota$ for "individuals" and $\omicron$ for "propositions". Since the logic is classical, every proposition will be either true or false, so $\omicron$ can be seen as the type of these "truth values". With this setup, we can view any term of type $\iota \rightarrow \omicron$ as describing a <i>set of individuals</i>, or alternatively, a predicate. This resembles the characteristic function of a set in mathematics, and also what a programmer calls a Boolean function. <p>[ The name "higher-order" is appropriate, since this logic lets us also talk about sets of sets of individuals, via terms of type $(\iota \rightarrow \omicron) \rightarrow \omicron$, and so on. This use also matches the terminology "higher-order function", which is not a coincidence. 
Note that you can quantify over "smaller-order" predicates, but you can't express a "set of all sets."] <p>The only built-in/primitive notion is a logical constant $\qeq_\alpha: \alpha \rightarrow \alpha \rightarrow \omicron$ which serves as the equality operator for any type $\alpha$. Before we build propositions out of logical operators and quantifiers, let us write down the types of such operators. With the common convention that $\alpha \rightarrow \beta \rightarrow \gamma$ stands for $\alpha \rightarrow (\beta \rightarrow \gamma)$, the types would be: $$ \newcommand{\qtrue}[0]{{\color{blue} {\mathrm{true}}}} \newcommand{\qfalse}[0]{{\color{blue} {\mathrm{false}}}} \newcommand{\qnot}[0]{{\color{blue} {\mathrm{not}}}} \newcommand{\qor}[0]{\color{blue} {\mathrm{or}}} \newcommand{\qand}[0]{\color{blue} {\mathrm{and}}} \newcommand{\qimplies}[0]{\color{blue} {\mathrm{implies}}} \begin{align*} \qtrue:\ & \omicron\\ \qfalse:\ & \omicron\\ \qnot:\ & \omicron \rightarrow \omicron\\ \qand:\ & \omicron \rightarrow \omicron \rightarrow \omicron\\ \qor:\ & \omicron \rightarrow \omicron \rightarrow \omicron\\ \qimplies:\ & \omicron \rightarrow \omicron \rightarrow \omicron\\ \end{align*} $$ Using again a notational convention that $((r~s)~t)$ is written $r~s~t$, the tautology $A \vee B \rightarrow B \vee A$ is expressed as $\qimplies~(\qor~A~B)~(\qor~B~A)$. <p>What about quantifiers? Since we have $\lambda$ for variable binding, there is no need for other variable binders like $\forall$ or $\exists$. Instead, we introduce universal and existential quantification using two higher-order operators: $$ \newcommand{\qPi}[0]{\color{blue} \Pi} \newcommand{\qSigma}[0]{\color{blue} \Sigma} \begin{align*} \qPi_\alpha:\ & (\alpha \rightarrow \omicron) \rightarrow \omicron\\ \qSigma_\alpha:\ & (\alpha \rightarrow \omicron) \rightarrow \omicron\\ \end{align*} $$ So $\forall x .\exists y. x = y$ would be expressed as $\qPi \lambda x. \qSigma \lambda y. \qeq~x~y$. 
People studying classical logical systems frequently "apply Occam's razor" and try to define as few things as possible. The $Q_0$ way is to push Occam's razor as far as possible: all logical operators are "implemented" using only $\qeq$ (with subscripts following the convention that $\qeq_\alpha$ compares two things of type $\alpha$). $$ \newcommand{\tprop}[0]{\omicron} \begin{align*} \qtrue &:= \qeq_{\tprop \rightarrow \tprop \rightarrow \tprop} \qeq_\tprop \qeq_\tprop\\ \qfalse &:= \qeq_{\tprop \rightarrow \tprop} (\lambda x:\tprop. \qtrue) (\lambda x:\tprop. x)\\ \qnot &:= \qeq_{\tprop} \qfalse\\ \qand &:= \lambda x:\tprop. \lambda y:\tprop. \\ &\quad \quad \qeq_{(\tprop \rightarrow \tprop \rightarrow \tprop) \rightarrow \tprop} \\ &\quad \quad \quad (\lambda z: \tprop \rightarrow \tprop \rightarrow \tprop. z~\qtrue~\qtrue) \\ &\quad \quad \quad (\lambda z: \tprop \rightarrow \tprop \rightarrow \tprop. z~x~y) \\ \qor &:= \lambda x:\tprop. \lambda y:\tprop. \\ &\quad \quad \qnot~(\qand~(\qnot~x)~(\qnot~y))\\ \qimplies &:= \lambda x:\tprop. \lambda y:\tprop. \qeq~x~(\qand~x~y)\\ \qPi_\alpha &:= \qeq_{\alpha \rightarrow \tprop}~(\lambda x : \alpha. \qtrue)\\ \qSigma_{\alpha} &:= \lambda x: \alpha \rightarrow \tprop.\ \qnot~(\qPi_{\alpha}~(\lambda y : \alpha.\ \qnot~(x~y))).\\ \end{align*} $$ <p>One advantage of having everything defined in terms of $\qeq$ is that when we give inference rules to explain how reasoning takes place, they all involve only $\qeq$. We are using the notation of inference rules to describe an equational theory. The only assertions $\Gamma \vdash r$ that are derived are (more or less - up to $\beta$-equivalence) of the form $\Gamma \vdash \qeq_\alpha~s~t$, but we explicitly write some assumptions that we make about types. 
Here they are: $$ \frac{\Gamma \vdash s : \tprop \rightarrow \tprop}{\Gamma \vdash \qeq_\tprop~(\qand~(s~\qtrue)~(s~\qfalse))~(\qPi_\tprop~\lambda x : \tprop.~s~x)} $$ $$ \frac{\Gamma \vdash s : \alpha \quad \Gamma \vdash t:\alpha \quad \Gamma \vdash z: \alpha \rightarrow \tprop}{\Gamma \vdash \qimplies~(\qeq_\alpha~s~t)~(\qeq_\tprop~(z~s)~(z~t))} $$ $$ \frac{\Gamma \vdash s : \beta \rightarrow \alpha \quad \Gamma \vdash t:\beta \rightarrow \alpha}{\Gamma \vdash \qeq~(\qeq_{\beta \rightarrow \alpha}~s~t) (\qPi_\beta \lambda x:\beta.\qeq_\alpha~(s~x)~(t~x))} $$ $$ \frac{}{\Gamma \vdash \qeq~((\lambda x : \alpha. s)~t)~(s [t/x])} $$ $$ \frac{\Gamma, x: \alpha \vdash r : \tprop \quad \Gamma \vdash r[A/x] \quad \vdash \qeq_\alpha~A~B}{\Gamma \vdash r[B/x]} $$ <!-- % % % Explanation why this is a logic % % --> <p>The first rule is an axiom that says $\tprop$ consists of $\qtrue$ and $\qfalse$. This looks like a data type specification for $\tprop$. The second rule is an axiom that says that if two objects are equal then they have the same properties (Leibniz would say "indiscernibility of identicals"). The third rule is an axiom that says two expressions of function type are equal if they agree on all arguments; this is the extensional definition of function equality. <p>The fourth rule is an axiom for $\beta$-conversion. We're conveniently hiding all the naming stuff again behind the $[t/x]$ notation; essentially, this lets us apply $\beta$-reduction as we please. <p>The fifth rule is a proper inference rule - the only actual inference rule, which has a premise $\Gamma \vdash r[A/x]$ - that says "equals can be replaced by equals", anywhere in a term. This is where the actual reasoning happens. For example, a proposition $\forall x:\alpha. P$ is encoded as $\qPi~\lambda x:\alpha. P$, and proving it "within the system" is literally showing that $\lambda x:\alpha. P$ is <i>equal</i> to $\lambda x:\alpha. \mathsf{true}$. 
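<p>Since $\tprop$ has exactly two truth values in the classical setting, the equality-based definitions above can be sanity-checked in the standard two-valued model. Below is a sketch in Python; the representation (Python <code>==</code> for $\qeq$ at base type, extensional comparison at function types, and all the names) is my own, and it checks the <i>model</i> - it is not reasoning inside $Q_0$:</p>

```python
from itertools import product

OMICRON = (True, False)            # the two truth values of type omicron

def eq_o(a, b):                    # eq at base type: plain equality
    return a == b

def binops():
    """All 16 functions of type o -> o -> o, given by their truth tables;
    used to compare functionals extensionally."""
    for t in product(OMICRON, repeat=4):
        yield lambda a, b, t=t: t[(0 if a else 2) + (0 if b else 1)]

# true: a trivially valid equation; false: "lambda x. true = lambda x. x",
# compared extensionally over OMICRON, which fails; the connectives then
# follow the definitions in the post.
TRUE = eq_o(eq_o, eq_o)
FALSE = all(TRUE == x for x in OMICRON)
NOT = lambda x: eq_o(FALSE, x)
AND = lambda x, y: all(z(TRUE, TRUE) == z(x, y) for z in binops())
OR = lambda x, y: NOT(AND(NOT(x), NOT(y)))
IMPLIES = lambda x, y: eq_o(x, AND(x, y))
```

<p>All the usual truth tables come out right: for example <code>IMPLIES(FALSE, TRUE)</code> evaluates to <code>True</code> and <code>AND(TRUE, FALSE)</code> to <code>False</code>.</p>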
<h3>Wrapping up</h3> <p>I presented this in a notation/style that is different from both Andrews's book and the Stanford Encyclopedia of Philosophy's entry on <a href="https://plato.stanford.edu/entries/type-theory-church/">Church's type theory</a>. It is closer to the way a typed programming language would be formulated. <p>We have only scratched the surface, but sketched how a whole formal system of reasoning can indeed be based on $\lambda$ calculus, with predicates playing the role of characteristic functions. And it didn't actually involve the Curry-Howard correspondence. I like to think of this situation as "multiple levels of logic," and topics like higher-order unification and meta-logic/logical frameworks make use of this. <p>I hope these notes can help raise awareness of what is going on with simple type theory and higher-order logic. A lot more can be said, but that is for another post. If you are interested in diving deeper, here are some references that talk about higher-order logic. <p> Stanford Encyclopedia of Philosophy's entry on <a href="https://plato.stanford.edu/entries/type-theory-church/">Church's type theory</a> <p> Jean Gallier. "Constructive Logics. Part I: A Tutorial on Proof Systems and Typed λ-Calculi", freely accessible on <a href="https://www.cis.upenn.edu/~jean/gbooks/logic.html">his logic papers page</a>. <p> William M. Farmer. "The Seven Virtues of Simple Type Theory". <a href="https://doi.org/10.1016/j.jal.2007.11.001">https://doi.org/10.1016/j.jal.2007.11.001</a></p><p> Peter Andrews. "An Introduction to Mathematical Logic and Type Theory: To Truth through Proof" (2002) <p> Alonzo Church. "A Formulation of the Simple Theory of Types". J. Symb. Logic, 5 (1) (1940) <p> Dag Prawitz. "Natural Deduction: A Proof-Theoretical Study" (2006) Dover - new issue of the celebrated 1965 work. 
Burak Emirhttp://www.blogger.com/profile/01651698653168460072noreply@blogger.com2tag:blogger.com,1999:blog-28320851.post-73555573722850273262019-11-26T06:46:00.000-08:002019-11-26T06:46:02.598-08:00Getting from TikZ to PNGFor things like papers, tutorials and books, LaTeX is still where it's at, but for HTML I'd like to be able to do things like commutative diagrams... how to make this work without duplicating effort? My LaTeX-fu has become rusty over the years, but despite much searching, there seems to be no alternative. So here is a little note on how to get from TikZ pictures to PNG.<br /><h2>TikZ for drawing diagrams</h2>TikZ (TikZ ist kein Zeichenprogramm) is probably the most powerful LaTeX package for graphics. There are supporting packages for commutative diagrams.<br /><br /><h2>Exporting TikZ to PNG</h2>After some searching I found the following way: use the "standalone" document class...<br /> <pre><br />\documentclass{standalone}<br />\usepackage{tikz}<br />\usetikzlibrary{cd}<br />\begin{document}<br />\begin{tikzcd}<br /> & X \arrow[dl] \arrow[d, dotted] \arrow[dr] & \\<br /> A & A \times B \arrow [l] \arrow{r}& B<br />\end{tikzcd}<br />\end{document}<br /></pre> ... 
together with the convert tool (ImageMagick): <pre><br />product.png: product.tex<br /> pdflatex $<<br /> convert -density 600x600 product.pdf \<br /> -quality 90 -resize 1080x800 -strip $@<br /></pre> <h2>The product</h2> And this is what we get: <br /> <div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-zTmCx9HmQP8/Xd05cKpFJJI/AAAAAAAAZqM/LPaEBaHYpkM5uQIwnqBLW2zB8ovYc1GIACNcBGAsYHQ/s1600/product.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://4.bp.blogspot.com/-zTmCx9HmQP8/Xd05cKpFJJI/AAAAAAAAZqM/LPaEBaHYpkM5uQIwnqBLW2zB8ovYc1GIACNcBGAsYHQ/s320/product.png" width="320" height="129" data-original-width="1080" data-original-height="434" /></a></div>Burak Emirhttp://www.blogger.com/profile/01651698653168460072noreply@blogger.com0tag:blogger.com,1999:blog-28320851.post-31997124264281864172018-05-10T08:11:00.002-07:002018-05-10T08:11:15.616-07:00Taking a Glance at Containers, Packages and DevelopmentTime to dust off this blog. <p>A long time ago, I worked on (yet another) open-source implementation of an RPC plugin (code generation + client/server framework) for handling the "services" that are part of protobuf. I had briefly looked at existing C++ solutions, but I remember I primarily cared about finding out what it would take. I looked for ages for a suitable build tool, ultimately settling on gyp and ninja. And then, I still needed to handle dependencies. <p>At some point I must have lost interest; many years have passed, and <a href="https://bazel.build">Bazel</a> was meanwhile open-sourced. What (subjectively) changed more than anything else is that the world has moved on: Docker containers and cloud setups are becoming more the norm than the exception, the word "DevOps" was invented, and Kubernetes is a thing. So people must have a way to deal with APIs. 
There is something called <a href="https://swagger.io">Swagger</a> which can document APIs and also generate client and server-side code, for many languages and targeting language-specific frameworks... <h2>Docker for Development</h2> <p>Apart from the revolution that this means for operations and delivery, this led me to think that a key development story is addressed this way, namely declaring dependencies and integrating them into the build process. Indeed I found a <a href="https://medium.com/travis-on-docker/why-and-how-to-use-docker-for-development-a156c1de3b24">nice article by Travis Reeder</a> that describes exactly the advantages, along with helpful examples and command line references. <h2>Build tools, Modularity and Packages</h2> Modularity is pretty much essential to serious development. I remember from university times that the academic view on modularity was primarily from the angle of programming languages (in the old times, "separate compilation", later, through type systems). Nowadays, nobody considers providing a programming language implementation in isolation: for instance, Rust has the notion of <a href="https://doc.rust-lang.org/book/first-edition/crates-and-modules.html">crates and modules</a>, and what's more, the cargo tool that manages crates is mentioned right in the tutorial. npm would be another good example. In Java-land, libraries end up on the class path, and external dependencies are managed using build tools like maven or gradle. <p>Just to state the obvious: the old view of considering modularity maybe as a low-level problem (linking and type systems) ignores the fact that a "modern" build tool has to handle package/module/crate repositories (multiple of them). On the other hand, the emergence of such repositories (plural) has its own problems, as seen when the "left-pad" package was pulled and broke many builds. 
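To make the repository dependency concrete: in npm, the package manifest is where that coupling lives. Here is a minimal sketch of a package.json (the app name is made up; the dependency shown is the very left-pad of the incident):

```json
{
  "name": "example-app",
  "version": "1.0.0",
  "dependencies": {
    "left-pad": "^1.3.0"
  }
}
```

The caret range means "any compatible 1.x release", resolved against an external repository at install time — precisely the kind of loose coupling that turned the removal of one tiny package into widespread build failures.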
"<a href="https://www.google.ch/search?q=npm+disaster">npm disaster</a>" is an entertaining search query, but it does lead to not-easily-resolved questions on the right granularity of modules/packages. <h2>Image or Container?</h2> With Docker as an emerging "modularity mechanism", it is very interesting to observe that in conversation (or documentation) people confuse a docker image with a docker container. If you are confused, <a href="https://nickjanetakis.com/blog/differences-between-a-dockerfile-docker-image-and-docker-container">this post</a> will get you on the right path again. <p>There is a parallel between a "module" as something that can be built and has an interface, and "module" as something that is <b>linked</b> and part of a running system. Since a container is a whole system in itself, networking is part of its interface. When building a larger system from containers as building blocks, the problem of connecting these requires tackling "low-level" configuration of an entirely different kind (see <a href="https://kubernetes.io/docs/concepts/cluster-administration/networking/#docker-model">Kubernetes docs</a>) than what "module linking" meant in the 80s. Still, I'd consider this a parallel. <h2>Where are the best practices?</h2> A <a href="https://www.google.ch/search?q=site:github.com+%22Dockerfile+at+master%22">quick search</a> reveals that indeed 53k+ projects have Dockerfiles, including e.g. <a href="https://github.com/tensorflow/tensorflow/tree/master/tensorflow/tools/docker">tensorflow</a>, which explicitly mentions deployment and development Docker images. So while initially, I thought I had not slept through many essential changes, it seems on the contrary that things are happening at an entirely different level. What conventions should one follow, what are the pitfalls? 
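Coming back to the image/container distinction for a moment, it is hard to miss on the command line. A sketch of a shell session (the image and container names are made up; an illustrative transcript, not a script to paste):

```shell
$ docker build -t myapp .        # builds an IMAGE: an inert, layered filesystem plus metadata
$ docker run --name dev1 myapp   # creates and starts a CONTAINER, an instance of that image
$ docker run --name dev2 myapp   # a second, independent container from the same image
$ docker images                  # lists images
$ docker ps -a                   # lists containers, running and stopped
```

One image, many containers: roughly the relation of an executable to its processes.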
<p>Fortunately for me, there are smart people good at posting articles about these things, so I can appreciate the problems and tentative solutions/best practices mentioned in <a href="https://www.weave.works/blog/kubernetes-best-practices">Top 5 Kubernetes Best Practices</a>. Hat-tip to the "Make sure your microservices aren’t too micro" advice. <h2>D.C. al Coda</h2> How can there be an end to this? Containerization seems to be a perpetual journey. At this point, I would have liked to offer tips based on a "typical" example of a development project and how one would use docker to set it up. However, I am not very active in terms of open source development, and the one project I want to get back to most isn't a developer tool at all. Fortunately, I believe best practices for "docker for development" aren't going to be any different from the best practices for using docker in general. <p>There is an interesting point where, in order to build a container image, one still has to solve the problem of gathering the dependencies. So the problem of gathering dependencies at build time has only been shifted one level (up? down?), though the constraints are very different: an organization can find easy ways to make docker images available to their developers, and only the team that takes care of providing the docker image has to gather the dependencies. This appears to me much more sane than having a development workflow that makes connections to external repositories on the developer's box at build time. Burak Emirhttp://www.blogger.com/profile/01651698653168460072noreply@blogger.com0tag:blogger.com,1999:blog-28320851.post-15341213256207916862015-06-19T18:30:00.000-07:002015-06-19T18:30:05.703-07:00Web Components and Dart Frameworks<p>Recently, I visited the scientific software and data management (SSDM) group at ETH to give a talk about web components and dart frameworks. Here is a summary. 
<h2>Looking back: a brief History of Web Apps</h2> We live in interesting times: there are many frameworks and languages out there, and in order to understand the context and why things are how they are, it helps to look at the history. The web as a platform for applications has evolved in an erratic process, which I will divide into three broad phases: <ul> <li> the <b>"Middle ages"</b>: content generated on the server-side, Java born with the promise of applets, Javascript launched as a tool for web designers (read: non-programmers), browser wars [1]. Client-side coding was a matter of achieving visual effects, client-server communication happened through forms and page-loads. <li> the <b>"Renaissance"</b>: asynchronous Javascript[2] becomes ubiquitous, client-server communication is now decoupled from forms and page-loads. Rich client-side apps like GMail or Google Maps demonstrate its use. <li> <b>"Modern History"</b>: high-performance layout and javascript engines are the norm. <a href="https://en.wikipedia.org/wiki/HTML5">HTML5</a> delivers a reinterpretation of browser documents and APIs, tailored to web application development. Mobile devices get high resolution screens and useful browsers. </ul> <p>HTML5 continues to evolve, and not all HTML5 technologies are supported on every browser (so developers check on <a href="http://caniuse.com">caniuse.com</a>). The idea of a <b><a href="https://remysharp.com/2010/10/08/what-is-a-polyfill">Polyfill</a></b> is born: adapter code that simulates functionality that does not exist natively in the browser. <h2>Web Components</h2><p>With application development on the web becoming the norm, the traditional questions of software engineering arise: how to approach development of large systems? how to encapsulate and reuse? Web components [3,4] offer an answer. 
</p><p>Web Components are four technologies that together achieve reusable units of behavior: <ul> <li>custom elements</li> <li>templates</li> <li>shadow DOM</li> <li>HTML imports</li></ul> <p>These technologies are being standardized and implemented in a few browsers. Even when not available natively, there is the possibility of using polyfills. </p><p>As with any web technology, there are good documentation resources available on the web that explain these. In the following, I'll just include a simple example for completeness.</p> <h3>Defining a new custom element</h3> <p>Say we want to write a bug-tracking system that displays a number of bug entries. We'd like to treat the entries as we would treat other HTML elements (note the fancy new HTML5 syntax): <pre><br /> <bug-entry bug-id=1e4f bug-status=open><br /></pre> <p>This is possible with custom elements; we'd register the element like so: <pre><br />// Create a new object based on the HTMLElement prototype.<br />var BugEntryProto = Object.create(HTMLElement.prototype);<br /><br />// Set up the element.<br />BugEntryProto.createdCallback = function() {<br /> ...render bug entry, e.g. with img, click handler/link<br />}<br />// Register the new element.<br />var BugEntry = document.registerElement('bug-entry', {<br /> prototype: BugEntryProto<br />});<br /></pre><h3>Filling the element with content</h3> Most of the time, custom elements will actually correspond to some other markup. With HTML templates, it is possible to include template markup in documents and instantiate it. 
<pre><br /><template id="bug-entry-template"><br /> <tr><td><img class="bug-image"></td><br /> <td><div class="bug-title"></div></td></tr><br /></template><br /><br />BugEntryProto.createdCallback = function() {<br /> … <br /> var template = document.querySelector('#bug-entry-template');<br /> var clone = document.importNode(template.content, true);<br /> …<br />}<br /></pre><h3>Specifying layout and style of the content</h3>In HTML, layout and style are specified using CSS; however, the rules apply globally. In order to have a self-contained, encapsulated unit of code, we need some mechanism that enables reuse without worrying about style. This is what is achieved with the shadow DOM: content can be added to the DOM in a way that keeps it separate from the rest of the DOM. <pre><br />BugEntryProto.createdCallback = function() {<br /> <b>// create shadow DOM <br /> var root = this.createShadowRoot();</b><br /><br /> // Adding a template<br /> var template = document.querySelector('#bug-entry-template');<br /><br /><br /> var clone = document.importNode(template.content, true);<br /> …<br /> <b>root.appendChild(clone);</b><br />}<br /><br /></pre><h3>Importing the new component</h3>The sections above have defined a component. Now we would like to be able to use the component. HTML imports provide a way to import components easily: <pre><br /><link rel="import" href="bug-entry.html"><br /></pre> <h2>Dart Frameworks</h2> <h3> Polymer </h3>The Polymer.js library provides many things: <ul><li>Polyfills to get web components in browsers that don't support them <li>Many ready-to-use components that follow Material Design UX specifications</li><li>Tools like vulcanize that translate away HTML imports for the purposes of deployment</li></ul> <p>The polymer.dart library is a wrapper around polymer.js that makes the polyfill as well as the components available in dart. It also offers an easy, integrated syntax which is mainly possible because of reflection. 
Instead of using vulcanize, users of polymer.dart rely on the polymer transformer which translates away the use of reflection and also takes care of combining all the imported components into one unit of deployment. <p>There is a bottom-up development approach suggested by the availability of polymer: start building basic components and assemble them into ever larger ones until you have an application. It is important to note that polymer is not an application development framework: things like logical pages, views, routing are not standardized, but can be provided by components. <h3> Angular, Angular.dart and Angular2 </h3> Angular existed before web components and polymer. It was always a framework for application development, so it provides facilities for pages, views, routing. The dart version of angular is significantly different from the js version, again because the use of reflection permits a nice style that uses annotations. Angular does not work seamlessly with web components defined in polymer, although it is possible to make the two things work with a bit of glue code. <p>The newer version, angular2, is backwards incompatible with angular (though concepts are similar). It aims to provide seamless integration of web components (as defined by the standard). It is noteworthy that annotations are advocated as the primary means to set up angular2 components. <h2>Conclusion</h2> Web development has become even more interesting with the promise of reusable, encapsulated components. Work on these is still in progress, but I for one am looking forward to more structured web applications that can draw from a large set of reusable components. 
<h2>References</h2> [1] <a href="http://www.infoworld.com/article/2653798/application-development/javascript-creator-ponders-past--future.html">Brendan Eich interview in InfoWorld, 2008</a><br>[2] <a href="https://en.wikipedia.org/wiki/Ajax_(programming)">https://en.wikipedia.org/wiki/Ajax_(programming)</a><br>[3] <a href="https://developer.mozilla.org/en-US/docs/Web/Web_Components">Web Components description in Mozilla Developer Network</a><br>[4] <a href="https://en.wikipedia.org/wiki/Web_Components">https://en.wikipedia.org/wiki/Web_Components</a><br>[5] <a href="http://jonrimmer.github.io/are-we-componentized-yet/">Are we componentized yet?</a>Burak Emirhttp://www.blogger.com/profile/01651698653168460072noreply@blogger.com0tag:blogger.com,1999:blog-28320851.post-61156499459496580992015-03-31T14:47:00.005-07:002020-12-22T05:36:54.287-08:00Educational Programming Environments for ChildrenIf one considers programming a passion, then one can endlessly search for the "essence" of programming. Sooner or later, one faces the question: how can something as abstract as programming be taught to children? <b>What</b> will we teach our children? <p>This is a broad topic, so I will focus on describing some programming environments that I saw being used at educational events at the Google Zurich office. The usual disclaimer applies: the opinions herein reflect only my personal point of view, not necessarily that of Google. <h2>Background and Setup</h2> <p>At the Google Zurich office, we occasionally organize events that teach (or rather: expose) programming to children. The occasion / program and audience were slightly different each time, but what is common are the goals of supporting STEM education, closing the gender gap in CS, and at one of the events also giving kids an impression of what their parents are doing ... basically showing them that this stuff is fun. 
Note that this wasn't a controlled experiment of any sort - my comparison of the environments we used is more an afterthought. <p>I have participated in a few of these as facilitator and seen how three educational programming environments were applied in practice, and I want to share a quick overview in the hope that it is useful for future events of this kind. These were: <ul><li>Logo,</li><li>Agent Cubes, and </li> <li>Scratch.</li> </ul> <p>All three environments were used in their online variant in the context of educational events organized for groups of kids: 30-40 Chromebooks and a bunch of volunteers to help in case of questions. Many times, a single child would get a machine of their own, but we also did groups of 2-3. Sometimes we would spend some time briefing the volunteers, but this was pretty minimal. <p> <h2>Logo</h2> This event was a 2h workshop and took place in November 2012. I was the main facilitator for it, and had previously written a logo interpreter <a href="http://www.burakemir.ch/arrowlogo/ArrowLogo.html">ArrowLogo</a> in my 20% time (*). I have also used it to discuss programming in a 30 minute session with a group of school children visiting the office in January 2015. <div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-APfDPFc8XsM/VRfTDPw-cvI/AAAAAAAAJqk/bi6lVJsDu7w/s1600/Screen%2BShot%2B2015-03-29%2Bat%2B12.24.36%2BPM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-APfDPFc8XsM/VRfTDPw-cvI/AAAAAAAAJqk/bi6lVJsDu7w/s320/Screen%2BShot%2B2015-03-29%2Bat%2B12.24.36%2BPM.png" /></a></div><p style="font-size: smaller; font-color:lightgray">(*) I used to code in logo when I was a kid myself. There are other, Java-based Logo implementations available e.g. <a href="http://xlogo.tuxfamily.net">xlogo</a>, but note that using these would require installation of a desktop app, or enabling the Java plugin in a web browser. 
Installing desktop apps is often not an option (e.g. Chromebooks won't allow this), and enabling the Java plugin often turned out to be a security hazard in the past. </p> <a href="http://en.wikipedia.org/wiki/Logo_(programming_language)">Logo</a> was invented by <a href="http://en.wikipedia.org/wiki/Seymour_Papert">Seymour Papert</a> and others in 1967. The design of Logo is deeply rooted in <b>constructivist learning</b>, and Papert's book "Mindstorms: Children, Computers, and Powerful Ideas" is a very accessible resource that explains well how its creator viewed it as a tool not to teach programming but to approach education in general. <p>The summary is this: children are <b>exploring</b> the environment, essentially teaching themselves. The "teaching yourself" part will resonate with every programmer. LOGO achieves this by providing a facility for turtle graphics: kids need to instruct the turtle where to move in order to create paintings. This is simple procedural (imperative) programming. <p>For practical purposes, in a short workshop one will usually want to provide exercises to give some structure to the session; I was able to use <a href="http://www.abz.inf.ethz.ch/primarschulen-stufe-sek-1/unterrichtsmaterialien/">LOGO teaching resources from ABZ</a>, a group at ETH Zurich that is promoting informatics education for young kids. For beginners, it is very easy to come up with turtle graphics exercises: you can draw triangles, boxes, houses and move on to circles, flowers, filled boxes, colored paintings. <p>Something notable about Logo is that the children's activity is very close to what programmers do: manipulate source code in textual representation, run, even "debug". It also means that some basic typing skills and being able to navigate a very simple graphical user interface with a mouse are requirements. 
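To illustrate how little syntax is involved, the classic progression from box to flower is just a line or two of Logo (a sketch in common Logo syntax; exact command names vary slightly between dialects):

```
; a square with side length 50
REPEAT 4 [FORWARD 50 RIGHT 90]

; rotate that square twelve times to get a "flower"
REPEAT 12 [REPEAT 4 [FORWARD 50 RIGHT 90] RIGHT 30]
```

A child can discover the flower from the square just by experimenting with an outer REPEAT — exactly the kind of exploration Papert had in mind.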
<p> Learning the ArrowLogo environment is very easy because it is very basic: there is a graphics panel, some place to write the code, and some commands to control what is happening, and that is it. The turtle graphics provides immediate feedback, and it is very easy to start over, giving children motivation to sort things out by themselves. After a short lead time, children seemed to be able to use the environment fine. <p>On the other hand, there are no fancy images or sounds, and building interactive things or games was out of scope. Also, ArrowLogo does not provide a way to save programs or take them home; since most programs written in this short time were basic and easy to reproduce, this did not seem to be a shortcoming. <h2>Scratch</h2> This event took place in March 2015, the audience was young girls (ages 10-12), and each of them had their own chromebook. They accessed <a href="https://scratch.mit.edu">scratch.mit.edu</a> with profiles that were created ahead of time (some confusion arose since some tried to treat their scratch profile info as a (Google-) account). <p>Scratch is developed by the MIT media lab, which is a successor to the group that created Logo. Scratch offers a visual programming editor: kids can create and customize actors and control them through scripts. The scripts are not coded as textual source code, but by combining blocks using drag-and-drop. <div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-tzGP7N_3o-Q/VRfTj3VS7eI/AAAAAAAAJqs/whIX6NLGcG0/s1600/Screen%2BShot%2B2015-03-29%2Bat%2B12.26.56%2BPM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-tzGP7N_3o-Q/VRfTj3VS7eI/AAAAAAAAJqs/whIX6NLGcG0/s320/Screen%2BShot%2B2015-03-29%2Bat%2B12.26.56%2BPM.png" /></a></div><p>The user interface is complex. 
There is almost no typing needed, except for entering values; however, a good command of the mouse and a good understanding of graphical user interfaces are essential, and this was clearly a problem for some of the participants. <p>Scratch is fascinating for kids, because it enables them to do game or animation development. However, it is also a moderately complex environment to learn, and if one is not ready to deal with this complexity, then the insights into programming that children can get are quite limited. <p>I showed kids how they could add keyboard control for their actors or automate their movement, but I feel that at least with some of the children, there was no understanding of what was happening and why, and no engagement. With such a complex environment, it is easy to fall back into an "instructionist" pattern and tell kids every single step they need to take and ask them to mechanically repeat it. Alternatively, one can accept that some of the participants will be distracted by looking at different shapes, being obsessed with drawing things or playing sounds. <p>Kids could save their work, take their profile details home and continue. Though many said they would like to continue, I have some doubts. <h2>Agent Cubes</h2> <a href="http://www.fhnw.ch/people/alexander-repenning/profile">Alexander Repenning</a> visited Google to run an "Hour of Code" workshop in May 2014, using his software <a href="https://www.agentcubesonline.com">Agent Cubes Online</a>. <p>Just as in scratch, the user can add agents and define behavior for them using rules. The focus is clearly on game design, and it is really easy to clone agents (e.g. think of the many trucks in the frogger game, all running the same code). The user interface for Agent Cubes Online however is even harder to get used to than scratch's. There are too many buttons, and most of them are too small. Like in scratch, there is a multitude of things that can be done, such as drawing your own agents. 
<p>The 'hour of code' was very structured: small demonstrations by the teacher would alternate with periods where kids were free to replicate the steps. Clearly, we have left any constructivist pretensions behind since the purpose is game design at all cost. <p>I think the level of engagement was clearly there, but it is hard to tell whether children would be able to "make it their own" later. Certainly, some kids enjoyed playing with the thing, but I don't think the majority of them thought of themselves as programmers or would return to game development. <p> <div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-k5TYbdXdHKs/VRr4ttrb1-I/AAAAAAAAJs8/couAPqnSLUU/s1600/Screen%2BShot%2B2015-03-31%2Bat%2B9.42.11%2BPM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-k5TYbdXdHKs/VRr4ttrb1-I/AAAAAAAAJs8/couAPqnSLUU/s320/Screen%2BShot%2B2015-03-31%2Bat%2B9.42.11%2BPM.png" /></a></div> Kids were able to save their work to their profile and could continue their work later. This required registering with their email address at the end of the workshop. <p><h3>Conclusion: use code.org</h3> My personal "conclusions" from all these were the following: <ul> <li>for kids, visual programming can be quite helpful, because it is appealing and easier to use. I am looking forward to seeing such a session with a touch screen interface</li> <li>there seems to be a wide spectrum between turtle graphics and game development. "Flashiness" seems to be a factor in getting some kids engaged; or as the game development motto goes "[audiovisual] content is king" <li>ArrowLogo is seriously lacking facilities for developing interactive things or animations. </ul><p>While doing some research, I found there is a more convincing solution in this design space. 
The Google-developed <a href="https://developers.google.com/blockly/">blockly</a> library for visual programming has been used at the <a href="http://code.org/learn">code.org Hour of Code</a> to create an introductory experience that easily beats all three environments that I described above; I mean: you can code turtle graphics by controlling a Disney(tm) character using visual programming blocks, and the character makes realistic ice-skating sounds as it moves! <p>I think if we take a step back, we will see that nothing can be concluded: it seems that the quest for the "right" answer will continue. I hope I have at least been able to shed some light on factors that contribute to a successful intro-to-programming event for kids. Burak Emirhttp://www.blogger.com/profile/01651698653168460072noreply@blogger.com0tag:blogger.com,1999:blog-28320851.post-10921171403252516452015-03-28T02:36:00.000-07:002015-03-28T02:42:53.961-07:00angular2 dart nested componentsThis is a quick post to share how to do nested components in angular2 dart. It is a slightly expanded version of this <a href="https://stackoverflow.com/questions/29121678/angular-2-component-inside-main-component/29125430">stackoverflow answer</a>. 
We start from the <a href="https://angular.io/docs/dart/latest/quickstart.html">quickstart tutorial</a> as follows: The <code>index.html</code> is unchanged: <pre><br /><html><br /> <head><br /> <title>My App</title><br /> </head><br /> <body><br /> <my-app></my-app><br /> <script type="application/dart" src="main.dart"></script><br /> <script src="packages/browser/dart.js"></script><br /> </body><br /></html><br /></pre> In our <code>main.dart</code> file, we can reference components inside our templates as long as we also reference them in the directives field: <pre><br />import 'package:angular2/angular2.dart';<br /><br />// These imports will go away soon:<br />import 'package:angular2/src/reflection/reflection.dart' show reflector;<br />import 'package:angular2/src/reflection/reflection_capabilities.dart' show ReflectionCapabilities;<br /><br />@Component(selector: 'graphics-panel')<br />@Template(inline: '<b>I am the graphics panel</b>')<br />class <b>GraphicsPanel</b> {}<br /><br />@Component(selector: 'input-panel')<br />@Template(inline: '<i>I am the input panel</i>')<br />class <b>InputPanel</b> {}<br /><br />@Component(selector: 'arrow-logo-app')<br />@Template(<br /> inline: '''<br /> <div><h1>Hello {{ name }}</h1> <br /> <graphics-panel></graphics-panel><br /> <input-panel></input-panel></div><br />''',<br /> <b>directives: const [GraphicsPanel, InputPanel]</b><br />)<br />class <b>AppComponent</b> {<br /> String name = 'Bob';<br />}<br /><br />main() {<br /> // Temporarily needed.<br /> reflector.reflectionCapabilities = new ReflectionCapabilities();<br /> <br /> bootstrap(AppComponent);<br />}<br /></pre>Burak Emirhttp://www.blogger.com/profile/01651698653168460072noreply@blogger.com0tag:blogger.com,1999:blog-28320851.post-84355894048263186532014-06-15T12:44:00.000-07:002014-06-15T13:13:23.737-07:00cflags weirdness in gypI had posted a few hints for gyp and ninja usage a while ago. 
I still think this combination makes a great build system, but there are also a few weird things to mention. Found the solution on <a href="https://gist.github.com/TooTallNate/1590684">this gist thread</a> and <a href="http://stackoverflow.com/questions/15171270/using-c11-with-gyp-project">this question and answer on stackoverflow</a>: if you want to specify c compiler flags or c++ compiler flags, you would do this via "xcode_settings". No comment. <pre><br />{<br /> 'make_global_settings': [<br /> ['CXX', '/usr/bin/clang++' ],<br /> # If you don't want to do this hack ...<br /> # ['CXX', '/usr/bin/clang++ -std=c++11' ],<br /> ],<br /> 'targets': [<br /> {<br /> 'target_name': 'shc',<br /> 'type': 'executable',<br /> 'sources': [<br /> # ... omitted<br /> ],<br /> # ... you would do this. It will work with the ninja generator.<br /> 'xcode_settings': {<br /> 'OTHER_CPLUSPLUSFLAGS': [<br /> '-std=c++11'<br /> ]<br /> }<br /> },<br /> ],<br />}<br /></pre>Burak Emirhttp://www.blogger.com/profile/01651698653168460072noreply@blogger.com0