<p><i>Uncut blocks of wood</i>, a blog by Burak Emir</p> <p><b>What are Commuting Conversions?</b> (2021-01-23)</p> The term "commuting conversions" in proof theory refers to a transformation of natural deduction proofs that involve sums (disjunction and existential formulae). The motivation comes from a desirable equivalence relation among natural deduction proofs that is lost when $\vee$ and $\exists$ are added. \(\newcommand{\orelim}[0]{\color{blue}{(\vee{\mathrm{\small ELIM}})}}\) Consider the following natural deduction rules $$ \cfrac{\begin{array}[cc]{} \\ {} \\ A \vee B \end{array} \quad \begin{array}[cc]{} [A] \\ ~~\vdots \\ \phantom{[}C\phantom{]} \end{array}\quad \begin{array}[cc]{} [B] \\ ~~\vdots \\ \phantom{[}C\phantom{]} \end{array} } {C}~\color{blue}{(\vee{\mathrm{\small ELIM}})} $$ $$ \cfrac{\begin{array}[cc]{} \\ {} \\ \exists x . P(x)\end{array} \quad \begin{array}[c]{} [P(t)] \\ ~~\vdots \\ \phantom{[}C\phantom{]} \end{array}} {C}~\color{blue}{(\exists{\mathrm{\small ELIM}})} $$ When the formula $C$ is not the end of the proof, but another formula $W$ is derived from it, then we get <em>multiple</em> proofs of the same thing. These multiple proofs are all <em>normal</em> in the sense that there are no "detours" in the form of an implication introduction followed by its elimination. If we want the normal form to be <em>unique</em>, then we must necessarily pick one among these possible proofs and designate it as "normal." This happens by adding the commuting conversions below.
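For programmers, the conversions below amount to a familiar program equivalence: applying a function to the result of a case analysis is the same as pushing the function into both branches. A small sketch in Python (tagged pairs standing in for a sum type; the functions $s$, $t$, $W$ are hypothetical placeholders):

```python
# The commuting conversion as a program equivalence. An Either value is a
# tagged pair ("left", a) or ("right", b); s, t and w are arbitrary functions.
def s(a):
    return a + 1

def t(b):
    return b - 1

def w(c):
    return c * 10

def apply_outside(r):
    # w(r match { case Left(a) => s(a); case Right(b) => t(b) })
    tag, v = r
    return w(s(v) if tag == "left" else t(v))

def apply_inside(r):
    # r match { case Left(a) => w(s(a)); case Right(b) => w(t(b)) }
    tag, v = r
    return w(s(v)) if tag == "left" else w(t(v))

# The conversion says these two programs agree on every input.
for r in [("left", 3), ("right", 7)]:
    assert apply_outside(r) == apply_inside(r)
```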
$$ \cfrac{\begin{array}[cc]{} \\ {} \\ A \vee B \end{array} \quad \begin{array}[cc]{} [A] \\ ~~\vdots \\ \phantom{[}C\phantom{]} \end{array}\quad \begin{array}[cc]{} [B] \\ ~~\vdots \\ \phantom{[}C\phantom{]} \end{array} } {\cfrac{~C~}{~W~}} ~ \rightsquigarrow ~ \cfrac{\begin{array}[cc]{} \\ {} \\ A \vee B \end{array} \quad \begin{array}[cc]{} [A] \\ ~~\vdots \\ \phantom{[}C\phantom{]} \\ \hline \phantom{[}W\phantom{]} \end{array}\quad \begin{array}[cc]{} [B] \\ ~~\vdots \\ \phantom{[}C\phantom{]} \\ \hline \phantom{[}W\phantom{]} \end{array} } {W}$$ $$ \cfrac{\begin{array}[cc] {} & [P(t)] \\ {} & \vdots \\ \exists x . P(x) & C \\ \end{array}}{\cfrac{~C~}{~W~}} ~ \rightsquigarrow ~ \cfrac{\begin{array}[cc] {} & [P(t)] \\ {} & \vdots \\ {} & C \\ \exists x . P(x) & \overline{~W~} \\ \end{array}}{W} $$ <h2>Pattern matching example</h2> Programmers may appreciate the following presentation of the same thing in Scala-like syntax. This: <blockquote> $\color{blue}{W}$($r$ match { case Left(a) => $\color{blue}{s}$; case Right(b) => $\color{blue}{t}$ }) </blockquote> is the same as this: <blockquote> $r$ match { case Left(a) => $\color{blue}{W(s)}$; case Right(b) => $\color{blue}{W(t)}$ } </blockquote> <hr/> <p><b>Kripke semantics and Tableaux for Intuitionistic Logic</b> (2020-12-26)</p> <p>Simply-typed $\lambda$ calculus terms can be seen as encodings of <a href="https://blog.burakemir.ch/2020/05/intuitionistic-propositional-logic-and.html">natural deduction proofs of intuitionistic propositional logic</a> (IPC - the C is for calculus; IPL is also common). This is the Curry-Howard correspondence. <p>So we can prove an IPC formula by finding a suitable term of simply-typed $\lambda$ calculus.
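For a taste of what such a term can look like, here is a sketch in Python standing in for $\lambda$ calculus (my own encoding: disjunction as tagged pairs, $\neg X$ as a function into the empty type $\bot$), giving a proof term of type $((p \vee \neg p) \rightarrow \bot) \rightarrow \bot$:

```python
# A sketch of a proof term for ((p or not p) -> bot) -> bot, with Python
# standing in for lambda calculus: "or" is a tagged pair ("left", _) or
# ("right", _), and a negation is a function into the (empty) type bot.
def proof(k):
    # k : (p + (p -> bot)) -> bot
    # We hand k the right injection: the function that, given a proof of p,
    # runs k again on the left injection. The two calls to k mirror the two
    # uses of the assumption in a natural deduction proof.
    return k(("right", lambda p: k(("left", p))))

# bot has no values, so we cannot "run" the proof for real; but we can probe
# its structure with a mock refutation k that records what it is fed.
calls = []
def k(x):
    calls.append(x[0])
    if x[0] == "right":
        return x[1]("a proof of p")   # call back with a hypothetical proof
    return "absurd"

assert proof(k) == "absurd"
assert calls == ["right", "left"]
```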
Sounds great - but there are other ways to represent proofs, and I wanted to understand better how the structure of these natural deduction proofs relates to the structure of those other proofs. <!-- \(\newcommand{\qfalse}[0]{\color{blue} {\mathit{\#f}}}\) -->\(\newcommand{\qfalse}[0]{\color{blue} {\bot}}\) \(\newcommand{\qp}[0]{\color{blue} p}\) \(\newcommand{\qq}[0]{\color{blue} q}\) \(\newcommand{\qX}[0]{\color{blue} X}\) \(\newcommand{\qY}[0]{\color{blue} Y}\) \(\newcommand{\qZ}[0]{\color{blue} Z}\) \(\newcommand{\qand}[0]{\color{blue} \wedge}\) \(\newcommand{\qor}[0]{\color{blue} \vee}\) \(\newcommand{\qimplies}[0]{\rightarrow}\) \(\newcommand{\qneg}[1]{{\neg #1}}\) \(\newcommand{\andintro}[0]{\color{blue}{(\wedge{\mathrm{\small INTRO}})}}\) \(\newcommand{\andelim}[0]{\color{blue}{(\wedge{\mathrm{\small ELIM}})}}\) \(\newcommand{\andelimleft}[0]{\color{blue}{(\wedge{\mathrm{\small ELIM/1}})}}\) \(\newcommand{\andelimright}[0]{\color{blue}{(\wedge{\mathrm{\small ELIM/2}})}}\) \(\newcommand{\orintro}[0]{\color{blue}{(\vee{\mathrm{\small INTRO}})}}\) \(\newcommand{\orelim}[0]{\color{blue}{(\vee{\mathrm{\small ELIM}})}}\) \(\newcommand{\implintro}[0]{\color{blue}{(\rightarrow{\mathrm{\small INTRO}})}}\) \(\newcommand{\implelim}[0]{\color{blue}{(\rightarrow{\mathrm{\small ELIM}})}}\) \(\newcommand{\notelim}[0]{\color{blue}{(\neg{\mathrm{\small ELIM}})}}\) \(\newcommand{\botelim}[0]{\color{blue}{(\bot{\mathrm{\small ELIM}})}}\) <p>As an example, consider the formula $\qneg{\qneg{(\qp \qor \qneg{\qp})}}$. We can take $\qneg{\qX}$ as an abbreviation for $\qX \qimplies \bot$. Is it "obvious" how to find a term of type $((\qp \qor \qneg{\qp}) \qimplies \bot) \qimplies \bot$? Here is a natural deduction proof (the rule names are from the list of <a href="https://blog.burakemir.ch/2020/05/intuitionistic-propositional-logic-and.html">intuitionistic natural deduction rules</a> in the post above; I can't be bothered with typesetting a tree). $$\begin{array}{lll} 1.
& \qneg{(\qp \qor \qneg{\qp})} & \mathrm{(assumption)} \\ 2. & \qp & \mathrm{(assumption)} \\ 3. & \qp \qor \qneg{\qp} & \mathrm{(via}~\orintro/1~\mathrm{from~2.)}\\ 4. & \bot & \mathrm{(via}~\implelim~\mathrm{apply~1.~to~3.)} \\ 5. & \qneg{\qp} & \mathrm{(via}~\implintro~\mathrm{from~[2.]~to~4.)} \\ 6. & \qp \qor \qneg{\qp} & \mathrm{(via}~\orintro/2~\mathrm{from~5.)} \\ 7. & \bot & \mathrm{(via}~\implelim~\mathrm{apply~1.~to~6.)} \\ 8. & \qneg{\qneg{(\qp \qor \qneg{\qp})}}& \mathrm{(via}~\implintro~\mathrm{from~[1.]~to~7.)} \\ \end{array} $$ <p>Maybe a better question to ask is: Why does this proof look the way it does? How, starting from the formula, can we arrive at such a proof, with steps so simple that we can leave them to a machine? <h1>Intuitionistic Tableaux</h1> <p>Kripke semantics and tableaux give us an alternative view of proving IPC formulae. The tableau calculus we discuss here is not very different from sequent calculus; it uses "block tableaux", where each node is a set of (signed) formulae. It will be a refutation calculus - despite proving intuitionistic logic! <p>The definitions here follow Melvin Fitting's book [1] and Smullyan's unified presentation [2]. <p>In contrast to block tableaux, "analytic tableaux" have one formula per node, and one considers the set of all formulae on a branch instead of a single node. These and connection methods are out of scope today, just like modal logic or the algebraic view (denotational semantics). The only differences from the aforementioned texts are that I use $\qfalse$ as primitive instead of negation, and that I make it explicit when a principal $T \qX$ formula has to be preserved in a tableau extension. <p>[1] Melvin Fitting. Intuitionistic Logic, Model Theory and Forcing [<a href="https://www.researchgate.net/publication/243704264_Intuitionistic_Logic_Model_Theory_and_Forcing">researchgate pdf</a>]. <p>[2] Raymond M. Smullyan.
A generalization of intuitionistic and modal logics. https://doi.org/10.1016/S0049-237X(08)71543-X <h3>Syntax for Intuitionistic and Minimal Propositional Logic</h3> <p>Once more, we define well-formed formulae (wff): <ul><li>a propositional variable $\qp$ is a wff</li><li>$\qfalse$ is a wff</li><li>if $\qX$ and $\qY$ are wff, then $\qX \wedge \qY$ is a wff</li><li>if $\qX$ and $\qY$ are wff, then $\qX \vee \qY$ is a wff</li><li>if $\qX$ and $\qY$ are wff, then $\qX \qimplies \qY$ is a wff</li></ul> <p>We use $\qq,\qp$ for propositional variables and $\qX, \qY, \qZ$ as metavariables for formulae. The symbol $\qfalse$ stands for "falsehood", "falsity" or "absurd" and is also written as $\bot$. We use $\qfalse$ and $\qimplies$ as primitive (built-in) symbols, and define negation $\qneg{\qX}$ as an abbreviation for $\qX \qimplies \qfalse$. This goes well with the intuitionist notion that "a proof of $\qneg{\qX}$ amounts to a construction that transforms a proof of $\qX$ into a proof of falsehood." <!-- <p>Programmers who have passing familiarity with the Curry-Howard-Lambek correspondence are going to be familiar with $\qfalse$ (a type without values, pretty much like <code>void</code>; an initial object in a bi-CCC); we will see how the handling of $\qfalse$ is the main difference between intuitionistic and minimal logic. --> <h3>Kripke semantics</h3> <p>A (Kripke) model for <i>intuitionistic</i> propositional logic is a triple $\langle \mathcal{G}, \mathcal{R}, \Vdash \rangle$ where $\mathcal{G}$ is a set of <i>states</i>, $\mathcal{R}$ an accessibility relation between states that is reflexive and transitive, and $\Vdash$ a relation between states and formulae that satisfies conditions $K_0$ - $K_4$ below. We use $\Gamma, \Delta$ for elements of $\mathcal{G}$, read $\Gamma \Vdash \qX$ as "$\Gamma$ forces $\qX$" and write $\Gamma^\star$ for "any $\Delta$ with $\Gamma \mathcal{R} \Delta$".
<p>($K_0$) If $\Gamma \Vdash \qX$ then $\Gamma^\star \Vdash \qX$ <p>($K_1$) $\Gamma \Vdash \qX \qand \qY$ iff $\Gamma \Vdash \qX$ and $\Gamma \Vdash \qY$ <p>($K_2$) $\Gamma \Vdash \qX \qor \qY$ iff $\Gamma \Vdash \qX$ or $\Gamma \Vdash \qY$ <p>($K_3$) $\Gamma \Vdash \qX \qimplies \qY$ iff for every $\Gamma^\star$, either $\Gamma^\star \not\Vdash \qX$ or $\Gamma^\star \Vdash \qY$. <p>($K_4$) $\Gamma \not\Vdash \qfalse$. <p>A formula is <i>valid</i> in a model if $\Gamma \Vdash \qX$ for all $\Gamma \in \mathcal{G}$. A formula is <i>valid</i> if it is valid in all models. Note that with $K_0$ in mind, we could also have written conditions $K_1$ and $K_2$ differently: <p>($K_1'$) $\Gamma \Vdash \qX \qand \qY$ iff for every $\Gamma^\star$, $\Gamma^\star \Vdash \qX$ and $\Gamma^\star \Vdash \qY$ <p>($K_2'$) $\Gamma \Vdash \qX \qor \qY$ iff for every $\Gamma^\star$, $\Gamma^\star \Vdash \qX$ or $\Gamma^\star \Vdash \qY$ <p>In the above definition, it is understood that $\Vdash$ is "seeded" with a relation $\Vdash^{0}$ between states and propositional variables $\qp$. This $\Vdash^{0}$ is then extended to a relation between states and whole formulae. It is also common to define states directly as subsets of the "available" propositional variables (the ones forced in that state). 
Here is an example, with two states $\Gamma = \{\qq\}$ and $\Delta = \{\qp, \qq\}$: <div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-i2UIGY6VmmA/Xp2ZZ9mFVII/AAAAAAAAcyU/u0e5UkTkl2AaV2mDIo8rlimyMSqhFbkawCNcBGAsYHQ/s1600/kripke_1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://1.bp.blogspot.com/-i2UIGY6VmmA/Xp2ZZ9mFVII/AAAAAAAAcyU/u0e5UkTkl2AaV2mDIo8rlimyMSqhFbkawCNcBGAsYHQ/s320/kripke_1.png" width="320" height="98" data-original-width="1080" data-original-height="332" /></a></div> <p>The relation $\Vdash$ takes all reachable states into consideration, and once a state $\Gamma$ forces $\qq$, any subsequent $\Gamma^\star$ must force it, too. <i>"Truth in a state"</i> is rather more dynamic than the absolute "truth" we are used to from classical logic. One can give this a temporal epistemic interpretation and say that states represent subjective <i>knowledge</i> of true facts, and that this knowledge "grows over time" (as we access subsequent states). <p>As an exercise: Convince yourself that $\Delta \Vdash \qp \qand \qq$, that $\Gamma \not\Vdash \qp$, and that the formula $\qp \qimplies \qq$ is valid in this model. Note further that $\qp \vee \qneg{\qp}$ is not valid. We have $\Gamma \not\Vdash \qp$, which is very different from saying $\Gamma \Vdash \qneg{\qp}$. <!--To see this, let us take a single propositional variable $p$ as our formula, and a little model $M$ based on states $\mathcal{G} = \{ \{\}, \{\qp\}\}$ with an accessibility relation that includes $\{\}\mathcal{R}\{\qp\}$ but not the other direction, and a "seed" relation $\Gamma \Vdash^0 p$ iff $p \in \Gamma$, properly extended to $\Vdash$. Now suppose $\{\} \Vdash \qp \qor \qneg{p}$:<ul> <li>Then either $\{\} \vdash \qp$ or $\{\} \vdash \qneg{p}$.<ul> <li>Try $\{\} \Vdash \qp$? However, $\{\} \not\Vdash^0 \qp$. <li>Try $\{\} \Vdash \qneg{p}$ instead?
But then $\{\} \mathcal{R} \{\qp\}$ with $\{\qp\} \Vdash^0 \qp$ and certainly $\{\qp\} \not\Vdash \qfalse$. </ul></ul>--> <p>[When Kripke models are used for modal logics, people consider accessibility relations with different properties, leading to different modal logics; there is an explicit symbol $\square$ to talk about accessibility, and our reflexive and (in particular) transitive $\mathcal{R}$ corresponds to S4 modal logic, with the S4 axiom $\square \qp \rightarrow \square\square \qp$. We will just mention here that intuitionistic logic can be <i>interpreted</i> in S4 modal logic.] <!-- <p>The property for $\Gamma \Vdash \qX \qimplies \qY$ reminds us of application in simply-typed $\lambda$-calculus, which combines $\qX \qimplies \qY$ and $\qX$ into a $\qY$. Recall that in the correspondence, $\lambda$ calculus terms are an encoding of <i>proofs</i>; we haven't seen any proofs yet. -->\(\newcommand{\qT}[0]{T}\) \(\newcommand{\qF}[0]{F}\) \(\newcommand{\qS}[0]{\mathrm{S}}\) <h3>Kripke semantics for Signed Formulae</h3> On to a proof system for IPC based on an intuitionistic tableau calculus (or just <i>intuitionistic tableaux</i>). To this end, we introduce <i>signed formulae</i>, using symbols $T$ and $F$ and writing $T\qX, F\qX$ for signed versions of a formula $\qX$: <ul><li>$\Gamma \Vdash T\qX$ means $\Gamma \Vdash \qX$ and</li> <li>$\Gamma \Vdash F\qX$ means $\Gamma \not\Vdash \qX$.</li> </ul> <p>In classical logic, $T\qX$ would mean $\qX$ is true and $F\qX$ would mean $\qX$ is false (or $\qneg{\qX}$ is true). The way to understand $\Gamma \Vdash F\qX$ non-classically is to read it as "it is <b>not known</b> that $\Gamma \Vdash \qX$." Note that $\Gamma$ is merely a state, not a context or set of formulae - we are using these assertions as building blocks to compare proof situations. For example, the assertions $T\qX \qand \qY$ and $F\qX$ cannot both hold at the same time (or in the same state).
<p>We follow Smullyan's uniform notation, and classify our formulae into different types (whether they are "conjunctive" or "disjunctive" in nature):<ul><li> type $\alpha$ is any signed formula of the shape $T\qX \qand \qY, F\qX \qor \qY, F\qX \qimplies \qY, F\qfalse$</li><li> type $\beta$ is any signed formula of the shape $F\qX \qand \qY, T\qX \qor \qY, T\qX \qimplies \qY, T\qfalse$.</li> </ul> <p>Since we added $\qfalse$ as primitive, we need to say something about its components, too. We treat $\qfalse$ like we treat an empty disjunction.</p> <p>For a signed formula of type $\alpha$ (resp. $\beta$), we call certain signed subformulae its <i>components</i> $\alpha_1$ and $\alpha_2$ (resp. $\beta_1$ and $\beta_2$), according to this table: $$ \begin{array}{c|c|c}% \alpha & \alpha_1 & \alpha_2 \\ \hline T\qX \qand \qY & T\qX & T\qY \\ \hline F\qX \qor \qY & F\qX & F\qY \\ \hline F\qX \qimplies \qY & T\qX & F\qY \\ \hline F \qfalse & F \qfalse & F \qfalse \end{array} \quad \quad \begin{array}{c|c|c}% \beta & \beta_1 & \beta_2 \\ \hline F\qX \qand \qY & F\qX & F\qY \\ \hline T\qX \qor \qY & T\qX & T\qY \\ \hline T\qX \qimplies \qY & F\qX & T\qY \\ \hline T \qfalse & T \qfalse & T \qfalse \end{array} $$ <p>Again, for $\qfalse$ the components are not proper subformulae, but this is not going to be a concern.</p> <p>Let us call $F\qX$ the <i>conjugate</i> of $T\qX$ and vice versa, and write $\overline{\qX}$ for the conjugate of a signed formula $\qX$. Note that the conjugate of a formula of type $\alpha$ is of type $\beta$ and vice versa. In classical logic, a set of signed formulae $\{T\qX_1,\ldots,T\qX_n, F\qY_1, \ldots, F\qY_m\}$ would correspond to a sequent calculus sequent $\qX_1, \ldots \qX_n \longrightarrow \qY_1, \ldots, \qY_m$. The sequent would be provable iff the set of signed formulae is unsatisfiable. The distinction into types $\alpha$ and $\beta$ explains whether a formula can be split into its components (conjunctive formulae $\alpha$) or whether the proof needs to branch into two possibilities (disjunctive formulae $\beta$).
<p>For intuitionistic logic, we are going to call all formulae $T \qX$ <i>permanent</i> and all formulae $F \qX$ <i>co-permanent</i>. With all this setup, we can remix our definition of Kripke model and $\Vdash$ into the following conditions using <b>signed</b> formulae $\qX$: <p>$(U_0(a))$ Either $\Gamma \Vdash \qX$ or $\Gamma \Vdash \overline{\qX}$, but not both. <p>$(U_0(b))$ If $\Gamma \Vdash \qX$ and $\qX$ is <i>permanent</i>, then $\Gamma^\star \Vdash \qX$. <p>$(U_1)$ For any permanent $\alpha$, $\Gamma \Vdash \alpha$ iff for every $\Gamma^\star$, $\Gamma^\star \Vdash \alpha_1$ and $\Gamma^\star \Vdash \alpha_2$ <p>$(U_2)$ For any permanent $\beta$, $\Gamma \Vdash \beta$ iff for every $\Gamma^\star$, $\Gamma^\star \Vdash \beta_1$ or $\Gamma^\star \Vdash \beta_2$ <p>The behavior on co-permanent $\alpha$ and $\beta$ is determined by $U_1$ and $U_2$ too, by taking $(U_0(b))$ into consideration. Observe: <p>$(U_1')$ For any co-permanent $\alpha$, $\Gamma \Vdash \alpha$ iff for some $\Gamma^\star$, $\Gamma^\star \Vdash \alpha_1$ and $\Gamma^\star \Vdash \alpha_2$ <p>$(U_2')$ For any co-permanent $\beta$, $\Gamma \Vdash \beta$ iff for some $\Gamma^\star$, $\Gamma^\star \Vdash \beta_1$ or $\Gamma^\star \Vdash \beta_2$ <p>As an exercise: Go back to the example above, and convince yourself that $\Delta \Vdash T\qp \qand \qq$ and $\Gamma \Vdash F\qp$. Check that the formula $T \qp \qimplies \qq$ is forced in every state, using the conditions $U_\star$ above. Now do the same for $T \qp \vee \qneg{\qp}$, remembering that it stands for $T \qp \vee (\qp \qimplies \qfalse)$. Now, to keep it interesting, what about $\Gamma \Vdash F \qp \vee \qneg{\qp}$? Is $F \qp \vee \qneg{\qp}$ valid? <p>A final definition is to call a formula <i>special</i> if it is co-permanent but has a permanent component; otherwise it is called <i>ordinary</i>. The only special formula we have is $F\qX \qimplies \qY$, of type $\alpha$.
If we had negation as a built-in operator, that would be a second special $\alpha$. Check out Smullyan's paper for predicate logic and other generalizations that require special $\beta$. <!-- <h3>Proofs in Sequent Calculus</h3> We treat a set $\qS$ of signed formulae $\{T\qX_1,\ldots,T\qX_n, F\qY_1, \ldots, \qY_m\}$ as a sequent $\qX_1,\ldots,\qX_n \vdash \qY_1,\ldots,\qY_m$ and give rules of sequent calculus in uniform notation. In sequent calculus, the root is at the bottom and the proof tree grows upwards, with axioms (leafs) at the top. Note that while Gentzen in his famous paper defines an intuitionistic sequent calculus LJ by restricting the number of formulae on the right to at most one, we are instead defining a multiple-succedent intuitionistic sequent calculus. $\vert~S~\vert$ stands for the sequent of the set of signed formulae $S$. <p>Axioms: $\large \vert \qS, \qX, \overline{\qX}\vert \quad \vert S, \qT \qfalse ~\vert$ <p>Inference rules: $$ \begin{array}{ll} (A1) & \mathrm{for~ordinary} ~ \alpha: ~ \LARGE \frac{\vert \qS,~ \alpha_1,~ \alpha_2\vert }{\vert \qS,~ \alpha\vert } \\ (A2) & \mathrm{for~special} ~ \alpha: ~ \LARGE \frac{\vert \qS_P,~ \alpha_1,~ \alpha_2\vert }{\vert \qS,~ \alpha\vert } \\ (B1) & \mathrm{for~ordinary} ~ \beta: ~ \LARGE \frac{\vert \qS,~ \beta_1 \vert}{\vert \qS,~ \beta \vert } \quad \frac{\vert \qS,~ \beta_2\vert }{\vert \qS,~ \beta\vert } \\ \end{array} $$ <p>There is a subtlety here, in that we are using sets and thus blur the fact that a permanent formula below the line <b>must</b> to be repeated above the line (e.g. $S, \beta$, consider $\beta \in S$ if $\beta$ is permanent. For permanent formulae $\qT\qX$, this corresponds to contraction being available on the left in Gentzen's intuitionistic sequent calculus LJ. --> <h3>Tableau Proofs</h3> <p>A tableau proof is like a sequent calculus proof written upside-down: we draw it with the root at the top, growing towards the bottom. Instead of $S \cup \{\qX\}$, we write $S, \qX$.
A vertical bar $\vert$ indicates that there are two children. $\qS_P$ means $\{ \qT\qX \vert \qT\qX \in \qS\}$ (the permanent part of $\qS$), and $\alpha_P$ means: repeat $\alpha$ if it is permanent (likewise for $\beta_P$). This is just to emphasize that $T \qX$ (possibly) needs to be preserved; in sequent calculus, this would surface as a structural operation (contraction). $$ \begin{array}{ll} (A1) & \mathrm{for~ordinary} ~ \alpha: ~ \LARGE \frac{\qS,~ \alpha }{\qS,~\alpha_P,~ \alpha_1,~ \alpha_2 } \\ (A2) & \mathrm{for~special} ~ \alpha: ~ \LARGE \frac{\qS,~ \alpha }{\qS_P,~ \alpha_1,~ \alpha_2 } \\ (B1) & \mathrm{for~ordinary} ~ \beta: ~ \LARGE \frac{\qS,~ \beta }{\qS,~\beta_P, ~ \beta_1 ~ \vert ~ \qS,~\beta_P,~\beta_2 } \\ \end{array} $$ <!-- $$\frac{\qS, \qT\qX \qand \qY}{\qS, \qT \qX, \qT \qY} \quad \frac{\qS, \qF\qX \qand \qY}{\qS, \qF \qX ~\vert~ \qS, \qF \qY} $$ $$\frac{\qS, \qT\qX \qor \qY}{\qS, \qT \qX ~\vert~ S, \qT \qY} \quad \frac{\qS, \qF\qX \qor \qY}{\qS, \qF \qX, \qF \qY} $$ $$\frac{\qS, \qT\qX \qimplies \qY}{\qS, \qF \qX ~\vert~ S, \qT \qY} \quad \frac{\qS, \qF\qX \qimplies \qY}{\qS_T, \qT \qX, \qF \qY} $$ $$\frac{\qS, \qF\qfalse}{?} $$ --> <p>Now, a <i>configuration</i> is a collection $[~\qS_1, \qS_2, \ldots, \qS_n~]$ of sets of signed formulae. A rule takes a configuration and replaces one $S_i$ with its result (or its two results). A <i>tableau</i> is a finite sequence of configurations $C_1, \ldots C_n$ in which each configuration except the first resulted from its predecessor by application of a rule. <p>A set of signed formulae is <i>closed</i> if it contains both $\qT \qX$ and $\qF \qX$ for some formula $\qX$ or if it contains $\qT \qfalse$. A configuration $[~\qS_1, \qS_2, \ldots, \qS_n~]$ is closed if <b>all</b> sets $\qS_i$ are closed. A tableau $C_1, \ldots C_n$ is closed if some $C_i$ is closed. <p>A tableau for a set of signed formulae $\qS$ is a tableau $C_1, \ldots, C_n$ with $C_1 = [~\qS~]$.
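The rules (A1), (A2), (B1) together with the closure condition are enough for a naive proof search. A sketch in Python (the representation is my own, and the loop check via a set of visited nodes is an implementation detail, not part of the calculus):

```python
# A naive proof search for the intuitionistic tableau calculus above.
# Formulae are tuples: ("var", "p"), ("bot",), ("and", X, Y), ("or", X, Y),
# ("imp", X, Y); a signed formula is ("T", X) or ("F", X).

def components(sf):
    """Classify a signed formula per the alpha/beta table; None if atomic."""
    sign, f = sf
    if f[0] == "and":
        return ("alpha" if sign == "T" else "beta", (sign, f[1]), (sign, f[2]))
    if f[0] == "or":
        return ("beta" if sign == "T" else "alpha", (sign, f[1]), (sign, f[2]))
    if f[0] == "imp":
        if sign == "T":                              # T X->Y: ordinary beta
            return ("beta", ("F", f[1]), ("T", f[2]))
        return ("alpha", ("T", f[1]), ("F", f[2]))   # F X->Y: special alpha
    return None                                      # variables and falsum

def closes(S, seen):
    """Can the set S of signed formulae be extended to a closed tableau?"""
    if ("T", ("bot",)) in S:
        return True
    if any(("T", f) in S and ("F", f) in S for (_, f) in S):
        return True
    if S in seen:                                    # revisited: no progress
        return False
    seen = seen | {S}
    for sf in S:
        c = components(sf)
        if c is None:
            continue
        kind, c1, c2 = c
        sign, f = sf
        if kind == "alpha":
            if sign == "F" and f[0] == "imp":
                # special alpha: keep only the permanent part S_P
                base = frozenset(x for x in S if x[0] == "T")
            else:
                base = S if sign == "T" else S - {sf}
            if closes(base | {c1, c2}, seen):
                return True
        else:   # beta: both branches must close
            base = S if sign == "T" else S - {sf}
            if closes(base | {c1}, seen) and closes(base | {c2}, seen):
                return True
    return False

def provable(f):
    return closes(frozenset({("F", f)}), frozenset())

P = ("var", "p")
def neg(f):
    return ("imp", f, ("bot",))

LEM = ("or", P, neg(P))
assert provable(("imp", P, P))      # p -> p holds
assert not provable(LEM)            # excluded middle is not provable
assert provable(neg(neg(LEM)))      # ... but its double negation is
```

The search tries every applicable rule in every set, so it is exponential; it is only meant to make the nondeterminism of the rules concrete.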
A finite set of signed formulae $\qS$ is <i>inconsistent</i> if some tableau for $\qS$ is closed. Otherwise, $\qS$ is <i>consistent</i>. A formula $\qX$ is a <i>theorem</i>, written $\vdash_I \qX$, if $\{\qF \qX\}$ is inconsistent, and a closed tableau for $\{\qF\qX\}$ is called a proof of $\qX$. <p>What we have done here is to define provability of $\qX$ as inconsistency of $\qF\qX$. This refutation calculus may not be appealing to an intuitionist. <p>Let's look at a proof of $\qneg{\qneg{(\qX \vee \qneg{\qX})}}$ in our $\qfalse$-primitive logic. $$ \begin{array}{l} [~\{\qF \qneg{\qneg{(\qX \vee \qneg{\qX})}}\}~] \\ [~\{\qT \qneg{(\qX \vee \qneg{\qX})}\}~] \\ [~\{\qT \qneg{(\qX \vee \qneg{\qX})}, \qF\, \qX \vee \qneg{\qX}\}~] \\ [~\{\qT \qneg{(\qX \vee \qneg{\qX})}, \qF \qX, \qF \qneg{\qX}\}~] \\ [~\{\qT \qneg{(\qX \vee \qneg{\qX})}, \qT \qX\}~] \\ [~\{\qT \qneg{(\qX \vee \qneg{\qX})}, \qF\, \qX \vee \qneg{\qX}, \qT \qX\}~] \\ [~\{\qT \qneg{(\qX \vee \qneg{\qX})}, \qF \qX, \qF \qneg{\qX}, \qT \qX\}~] \\ \end{array} $$ The same proof, in a more verbose form: $$ \begin{array}{l} [~\{\qF (((\qX \vee (\qX \qimplies \qfalse)) \qimplies \qfalse) \qimplies \qfalse)\}~] \\ \alpha (F\qimplies) \\ [~\{\qT ((\qX \vee (\qX \qimplies \qfalse)) \qimplies \qfalse), \qF \qfalse\}~] \\ \beta (T\qimplies) \\ [~\{\qT ((\qX \vee (\qX \qimplies \qfalse)) \qimplies \qfalse), \qF (\qX \vee (\qX \qimplies \qfalse)), \qF \qfalse\}~, \{\ldots, \qT \qfalse\}] \\ \alpha (F\qor)\\ [~\{\qT ((\qX \vee (\qX \qimplies \qfalse)) \qimplies \qfalse), \qF \qX, \qF (\qX \qimplies \qfalse), \qF \qfalse\}~, \{\ldots, \qT \qfalse\}] \\ \alpha (F\qimplies)\\ [~\{\qT ((\qX \vee (\qX \qimplies \qfalse)) \qimplies \qfalse), \qT \qX, \qF \qfalse\}, \{\ldots, \qT \qfalse\}] \\ \beta (T\qimplies)\\ [~\{\qT ((\qX \vee (\qX \qimplies \qfalse)) \qimplies \qfalse), \qF (\qX \vee (\qX \qimplies \qfalse)), \qT \qX, \qF \qfalse \}, \\ ~ \{\ldots, \qT \qfalse\}, \{\ldots, \qT \qfalse\}] \\ \alpha (F\qor)\\ [~\{\qT ((\qX \vee (\qX
\qimplies \qfalse)) \qimplies \qfalse), \qF \qX, \qF (\qX \qimplies \qfalse), \qT \qX, \qF \qfalse \}, \\ ~ \{\ldots, \qT \qfalse\}, \{\ldots, \qT \qfalse\}] \\ \end{array} $$ <h4>What is convenient about this?</h4> <p>In applying the rules of the tableau calculus, we take the formula we want to prove apart and deal with its subformulae until every set in the configuration is closed. One never needs to guess an intermediate formula or assumption. However, the procedure is not entirely deterministic, in the sense that we still need to pick which formula to take apart. Automated reasoning! And it should be easy to convert the result into a natural deduction proof (similar to Prawitz's view that a sequent calculus proof can be seen as a description of how to construct a natural deduction proof). <p>Another thing that is great about the tableau method is that if, after taking everything apart, you end up with open branches (that cannot be closed), you can read them as models that provide counter-examples. The application of a "special" rule indicates that we transitioned to another state. <h4>What is special about intuitionistic tableaux?</h4> <p>Let's consider that a signed formula $\qF \qX$ corresponds to the right-hand side of a sequent (the succedent). So unlike in Gentzen's LJ, it is quite fine to have multiple formulae mingle on the right-hand side and stay intuitionistic, as long as the right-$\rightarrow$ rule (i.e. the rule for $\qF \qX \qimplies \qY$) keeps only one. This has also been explained here: <p>[3] Valeria de Paiva, Luiz Carlos Pereira. A short note on intuitionistic propositional logic with multiple conclusions [<a href="https://www.researchgate.net/publication/228619994_A_short_note_on_intuitionistic_propositional_logic_with_multiple_conclusions">researchgate pdf</a>] <p>The assertion $\qF \qX \qimplies \qY$ says: "at present, we think we can prove $\qX$ without at the same time proving $\qY$".
In considering such a possible proof $\qT \qX$, we need to do bookkeeping to ensure that anything we prove along the way is kept separate from other formulae $\qF \qZ$ that we labeled as not-yet-known. This is why the special $\alpha$ rule uses $\qS_P$ instead of $\qS$. This looks like the essence of intuitionistic reasoning as far as proof rules are concerned. It also makes intuitionistic tableaux a bit more complicated than classical ones. <h4>Coda</h4> <p>I hope the above sheds some light on the connection between tableau calculus, Kripke semantics and intuitionistic logic. Soundness and completeness can be found in Fitting's book, as well as generalizations to modal and predicate logic. Of course, for stronger logics, carrying over soundness and completeness does not mean we retain decidability; stronger logics require not only taking formulae apart, but also instantiating them, which increases non-determinism and the search space. <hr/> <p><b>Intuitionistic Higher-Order Logic and Equality</b> (2020-09-28)</p> <h1>Intro</h1> <p> I have posted plenty of motivation for considering equality and higher-order logic elsewhere on this blog by now. Let me finally get to the intuitionistic version of higher-order logic, since I <a href="https://blog.burakemir.ch/2020/04/higher-order-logic-and-equality.html">talked about the classical version previously</a> and then also talked at length about <a href="https://blog.burakemir.ch/2020/05/intuitionistic-propositional-logic-and.html">"Intuitionistic Logic and Natural Deduction"</a> without really getting closer to the matter.
Intuitionistic higher-order logic is what lets you talk about an elementary topos, and using equality to characterize the logical connectives gives an insightful perspective on those commutative diagrams for people who are familiar with $\lambda$ calculus. \( \newcommand{\qeq}[0]{\color{blue} {\mathrm{eq}}} \newcommand{\qSigma}[0]{\color{blue} \Sigma} \newcommand{\qPi}[0]{\color{blue} \Pi} \newcommand{\qor}[0]{\color{blue} {\mathrm{or}}} \newcommand{\tprop}[0]{\omicron} \newcommand{\qtrue}[0]{{\color{blue} {\mathrm{true}}}} \newcommand{\qand}[0]{\color{blue} {\mathrm{and}}} \newcommand{\qimplies}[0]{\color{blue} {\mathrm{implies}}} \newcommand{\qPi}[0]{\color{blue} \Pi} \newcommand{\qfalse}[0]{{\color{blue} {\mathrm{false}}}} \newcommand{\qnot}[0]{{\color{blue} {\mathrm{not}}}} \) </p> <p>I am going to refer to material in <a href="https://blog.burakemir.ch/2020/04/higher-order-logic-and-equality.html">"Higher order logic and equality"</a> often. Like in that post, there is actually no original material here; I am just gathering definitions that are probably well-known to some grad students and experts (yet nobody calls them standard, since neither higher-order logic nor intuitionistic logic is known very well outside certain circles that deal with formal logic, proof assistants and the like). This is my feeble attempt at an exposition. <h1>The Logic of a Topos</h1> <p> We take a $\lambda$ calculus with surjective pairing and a type $\omicron$ which we take as a substitute for "truth values" (topos theory texts use $\Omega$, but we want easy comparison with the classical version linked above; we spell out $\qPi \lambda x. t$ without setting up an abbreviation $\forall x. t$, and likewise $\qSigma \lambda x. t$ without an abbreviation $\exists x. t$).
We want to set this up in an intuitionistic manner, which means doing away with the idea that $\omicron$ stands for two truth values, and not using classical logical equivalences to define $\qor$ and $\qSigma$. The terms then correspond to a particular cartesian-closed category, the <a href="https://ncatlab.org/nlab/show/free+topos">free topos</a>. </p> <p> We again want to define everything in terms of equality $\qeq_\alpha: \alpha \rightarrow \alpha \rightarrow \omicron$, where the subscript indicates the type of the terms being compared. What is this equality doing? In our language, it talks about equality of <i>terms</i>; in categorical logic, terms correspond to <i>arrows</i>, and thus equality should talk about equality of arrows. (There is some detail hidden here, as our language will also let us talk about "global elements" $1 \stackrel{c}{\longrightarrow} C$, which are written as constant terms $c: C$). We take the following elements of our logical language: $$ \begin{align*} \qtrue:\ & \omicron\\ \qand:\ & \omicron \rightarrow \omicron \rightarrow \omicron\\ \qimplies:\ & \omicron \rightarrow \omicron \rightarrow \omicron\\ \qPi_\alpha:\ & (\alpha \rightarrow \omicron) \rightarrow \omicron\\ \end{align*} $$ </p> <p> ... and <i>define</i> them using the only primitive $\qeq$: $$ \begin{align*} \qtrue &:= \qeq_{\tprop \rightarrow \tprop \rightarrow \tprop} \qeq_\tprop \qeq_\tprop\\ \qand &:= \lambda x:\tprop. \lambda y:\tprop. \\ &\quad \quad \qeq_{(\tprop \rightarrow \tprop \rightarrow \tprop) \rightarrow \tprop} \\ &\quad \quad \quad (\lambda z: \tprop \rightarrow \tprop \rightarrow \tprop. z~\qtrue~\qtrue) \\ &\quad \quad \quad (\lambda z: \tprop \rightarrow \tprop \rightarrow \tprop. z~x~y) \\ \qimplies &:= \lambda x:\tprop. \lambda y:\tprop. \qeq~x~(\qand~x~y)\\ \qPi_\alpha &:= \qeq_{\alpha \rightarrow \tprop}~(\lambda x : \alpha. \qtrue) \end{align*} $$ </p> <p> These definitions are the same as for classical logic.
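Since the definitions coincide with the classical ones, we can at least exercise them in the classical two-element model, where $\qeq$ becomes extensional equality over finite domains. This only checks classical behavior (the intuitionistic semantics needs a topos / Heyting algebra); the representation and names below are mine:

```python
# Sanity check of the equality-based definitions in the classical model:
# omicron = {True, False}, and eq on functions is extensional equality.
from itertools import product

O = [True, False]

# All 16 functions omicron -> omicron -> omicron, as dicts keyed by pairs;
# these form the domain of the bound variable z in the definition of "and".
ZS = [dict(zip([(a, b) for a in O for b in O], vals))
      for vals in product(O, repeat=4)]

def and_(x, y):
    # and x y := eq (lambda z. z true true) (lambda z. z x y),
    # checked extensionally over every z.
    return all(z[(True, True)] == z[(x, y)] for z in ZS)

def implies(x, y):
    # implies x y := eq x (and x y)
    return x == and_(x, y)

def forall(p):
    # Pi p := eq (lambda x. true) p, checked extensionally.
    return all(p(x) is True for x in O)

# and/implies behave like the usual connectives:
assert [and_(x, y) for x, y in product(O, repeat=2)] == [True, False, False, False]
assert implies(False, True) and implies(False, False) and not implies(True, False)
assert forall(lambda x: implies(x, x))   # x -> x is valid
assert not forall(lambda x: x)           # not everything is true
```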
An intro to topos theory will typically describe $\qand$ and $\qimplies$ using commutative diagrams, but here we are defining the language first. Moreover, the language has only one primitive $\qeq$, though that primitive is instantiated at different types. </p> <h1>False</h1> <p> Now we look at $\qfalse$, which is also defined the same as in the classical case: $$ \begin{align*} \qfalse &: \omicron\\ \qfalse &:= \qeq_{\tprop \rightarrow \tprop} (\lambda x:\tprop. \qtrue) (\lambda x:\tprop. x)\\ \end{align*} $$ </p> <p> In a topos theory text, one encounters $\forall v:\omicron. v$, which is shorthand for $\qPi\lambda v:\omicron. v$ and actually expands to the very same definition above. The definition that mentions $\qeq$ explicitly has the advantage that it explains what is going on: it morally says "if there is anything other than $\qtrue$ in $\omicron$, then $\lambda x:\omicron. \qtrue$ and $\mathsf{id}_\omicron$ are not the same." So we have a value $\qfalse$, but it need not be the only thing distinct from $\qtrue$ in $\omicron$. </p> <h1>Not, Or and Exists</h1> <p> On to intuitionistic negation $\qnot$. In intuitionistic logic, we would define negation like this: $$ \begin{align*} \qnot &: \omicron \rightarrow \omicron\\ \qnot &:= \lambda x: \tprop. \qimplies(x, \qfalse) \end{align*} $$ </p> <p> If we consult the classical version $\qnot := \qeq_{\tprop} \qfalse$, we again find that this is not that different. </p> <p> Finally, let's look at $\qor$ and $\qSigma$. We define these using $\qPi$: $$ \begin{align*} \qor &: \omicron \rightarrow \omicron \rightarrow \omicron\\ \qor &:= \lambda x: \omicron. \lambda y: \omicron. \qPi \lambda r:\omicron. \\ & \quad \quad \qimplies(\qand(\qimplies(x, r), \qimplies(y,r)), r) \\ \\ \qSigma_\alpha &: (\alpha \rightarrow \omicron) \rightarrow \omicron \\ \qSigma_\alpha &:= \lambda x: \alpha \rightarrow \omicron. \qPi \lambda r:\omicron. \\ &\quad \quad \qimplies(\qPi \lambda z:\alpha. \qimplies(x~z, r), r) \end{align*} $$ </p> <p> We can get an intuitive understanding of these definitions: for $\qor$, if we have a way of obtaining an element $r$ (result) from $x$ and one of obtaining the same from $y$, then also from $(\qor~x~y)$. Similarly for $\qSigma$, if we have a way of obtaining any result $r$ from the "instantiated" predicate $x z$, then also from $\qSigma x$. </p> <p>The definition of $\qor$ in terms of $\qand$ and $\qPi$ is not very different from the natural deduction rule for "or elimination": $$ \cfrac{\begin{array}[cc]{} \\ {} \\ A \vee B \end{array} \quad \begin{array}[cc]{} [A] \\ ~~\vdots \\ \phantom{[}C\phantom{]} \end{array}\quad \begin{array}[cc]{} [B] \\ ~~\vdots \\ \phantom{[}C\phantom{]} \end{array} } {C}~\color{blue}{(\vee{\mathrm{\small ELIM}})} $$ <p>Checking the natural deduction rule for existential elimination and comparing it with the encoding above is left as an exercise.</p> <h1>The end of the beginning</h1> <p>Higher order logic seems to deserve more attention. We have set up a logical system based on $\lambda$ calculus without making use of the Curry Howard correspondence. While the previous discussion focused on classical higher order logic as introduced by Church in 1940, in this post we looked at intuitionistic higher order logic. Interest in the latter is heavily influenced by topos theory. <p>It is slightly confusing that when discussing elementary toposes, one talks of first-order logic when in reality the logic is inherently higher-order (because a topos is cartesian closed). I guess the charitable interpretation is that people mean "you can do everything that you would want to do in formal set theory using first-order logic." <p> The Lambek and Scott book "Introduction to higher order categorical logic" dwells a bit on the fact that it is always possible to get a formulation of the internal language of a topos using equality.
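<p>The definitions of $\qor$ and $\qSigma$, too, can be sanity-checked by reading them classically over two truth values. Here is a Python sketch (names of my own choosing): the $\qPi$ over $\omicron$ becomes a finite conjunction over both truth values, and $\qSigma$ is checked over an explicitly given finite domain.</p>

```python
Bool = [True, False]

def imp(a, b):
    """Classical reading of implies."""
    return (not a) or b

def qor(x, y):
    # or x y := Pi \r. ((x -> r) and (y -> r)) -> r
    return all(imp(imp(x, r) and imp(y, r), r) for r in Bool)

def qsigma(domain, p):
    # Sigma p := Pi \r. (Pi \z. (p z -> r)) -> r
    return all(imp(all(imp(p(z), r) for z in domain), r) for r in Bool)
```

<p>In this reading, <code>qor</code> comes out as disjunction, and <code>qsigma(range(10), lambda n: n == 3)</code> behaves like the existence of a witness in the domain.</p>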
Andrews $Q_0$ develops simple type theory / higher order logic using a single equality operator $Q$. Various theorem proving systems have been based on this (TPS and HOL). <p>One could say that the way higher-order logic admits the definition of logical connectives in terms of others is an "encoding" of natural deduction rules. Could one use this idea to build a logical framework, a representation of derivations? Indeed, this is what underlies the <a href="https://www.lri.fr/~wolff/tutorials/2014-LRI-isabelle-tutorial/pure.pdf">Isabelle/Pure framework</a> and <a href="http://www.lix.polytechnique.fr/Labo/Dale.Miller/lProlog/">$\lambda$Prolog</a>. <p>It would seem worthwhile to write a proper book that takes a logical perspective on the $\lambda$ calculus, developing higher order logic as language, category theory as semantics and arriving at elementary toposes as models. It would probably feel very elementary and pedestrian in the academic world, but I do think it would be useful. It feels to me that gathering the relevant theory on logic and computation requires so much "background knowledge" that most practitioners will not have the energy (and this says something about the potential for adoption of formal methods). Until such a book arrives (I'm not sure if <i>I</i> would have the energy), I hope my out-of-order blog posts provide some help on the reader's journey. There are certainly a lot of connections between logic, language, types and semantics that are left to be explored, besides Curry Howard and dependent types. Burak Emirhttp://www.blogger.com/profile/01651698653168460072noreply@blogger.com0tag:blogger.com,1999:blog-28320851.post-25339945965845296172020-09-27T19:37:00.000-07:002020-09-27T19:37:36.386-07:00Programmers love Higher-Order Logic<p> My <a href="https://blog.burakemir.ch/2020/04/higher-order-logic-and-equality.html" >previous take on higher-order logic</a > was classical, channeling Church's simple theory of types and Andrews $Q_0$.
In that last post, I was hinting at motivation. So I want to expand a bit on how this stuff can be useful for building software at large. I am posting some ramblings here. </p> <p></p> <h1>Equality is Fundamental</h1> <p> There are two different "levels" at which a programmer deals with some kind of reasoning that involves equality: </p> <ul> <li> while writing code: one needs to be able to tell when two objects are the same. </li> <li> when designing or arguing that code is correct: reasoning about whether behavior, like output values, conforms to a (however implicit) specification </li></ul> <p> In both cases, one needs to be able to identify whether objects (or behaviors or input-output pairs) are "the same" as expected ones. But the latter activity, reasoning about program correctness, happens in some form of logic. Much that has been written and is being written on formal logic is about mathematics and mathematical convention. This is true whether we talk about semantics of programming languages or program specification. </p> <p> But for programmers, there are benefits to knowing formal reasoning even if one is not interested in the foundations of math, axiomatizations of set theory etc. It is no secret that in mathematics as in programming, you run into situations where there are multiple different ways to represent some data, and one needs to choose or translate between those representations. While programmers resign themselves to writing "proto-to-proto" translations over and over (which are often not as trivial as they seem), in mathematics it is customary to leave out the awkward details, introduce "abuse of notation" and quotienting to keep arguments informal, but concise. </p> <blockquote> The heart and soul of much mathematics consists of the fact that the “same” object can be presented to us in different ways. [1] </blockquote> <p> [1] Barry Mazur. <a href="http://people.math.harvard.edu/~mazur/preprints/when_is_one.pdf" >When is one thing equal to some other thing?
</a></p> <h1>Equality in First Order Logic</h1> <p> We take a leap towards "formal systems", where objects are represented using (strings of) symbols, along with strict rules to form statements and derive new true statements from existing ones. </p> <p> A typical definition of a first-order logic language contains constant and function symbols, predicate symbols, and logical operators. </p> <p> There is a "design choice" here: whether equality is considered as <i>built-in</i>, magically denoting exactly the equality in the underlying sets, or whether equality is a predicate with certain special properties. <a href="https://en.wikipedia.org/wiki/First-order_logic#Equality_and_its_axioms" >Wikipedia</a > talks about "first-order logic with equality" (when built-in) and first-order logic without equality (when it is not). <!-- <p>A not very obvious "foundational fact" is that when people point at axiom systems like ZFC (or NGB or Ackermann set theory), there is always hidden a commitment towards first-order logic <b>with</b> equality. It does not make sense otherwise. The mindset that is at least encouraged by such a view is that there is some universe of sets and mathematical activity reveals its properties.</p>--></p> <p> When equality does not come built-in, one needs to add axioms. The equality axioms are special, in the sense that they have to be preserved by all the function and relation symbols.
This looks like: <ul> <li>$\mathsf{eq}$ is reflexive, symmetric, transitive</li> <li>if $\mathsf{eq}(x_1, y_1), \ldots, \mathsf{eq}(x_n, y_n)$, then $\mathsf{eq}(f(x_1,\ldots,x_n),f(y_1,\ldots,y_n))$ for all function symbols $f$ of arity $n$ <li>if $\mathsf{eq}(x_1, y_1), \ldots, \mathsf{eq}(x_n, y_n)$, then $R(x_1,\ldots,x_n) \Leftrightarrow R(y_1,\ldots,y_n)$ for all relation symbols $R$ of arity $n$ </ul> <p>With a "defined" equality, it is entirely possible to interpret logical statements involving $\mathsf{eq}$ in such a way that two distinct elements are regarded as equal according to $\mathsf{eq}$. <h1>Coding our own equality</h1> <p> When we are talking about practical software development, there is really no hope of defining platonic ideals; yet when it comes to techniques of reasoning, informaticians are in various ways confined to results and methods that come from formalizing mathematics. </p> <p> Take this example: We'd like our terms to represent the free monoid on $X$, so we'd like these terms to satisfy the following equations (or identities): $$s \cdot 1 = s$$ $$t = 1 \cdot t$$ $$(s \cdot t) \cdot u = s \cdot (t \cdot u)$$ <p> Consider definitions in some fictional language to express the "interface" of a monoid: $$ \begin{array}{ll} \mathsf{interface}~ M[X] \\ \quad \mathsf{unit}(): M[X]\\ \quad \mathsf{emb}(x: X): M[X]\\ \quad \mathsf{dot}(s: M[X], t: M[X]): M[X]\\ \quad \mathsf{eq}(s: M[X], t: M[X]): \mathsf{bool} \end{array}$$ </p> <p> Here, $\mathsf{unit}$ stands for $1$, $\mathsf{emb}(x)$ is for embedding $x$ and $\mathsf{dot}$ is the combine operation in $M[X]$.
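<p>As a concrete realization, the free monoid on $X$ is (up to isomorphism) the set of finite sequences over $X$. Here is a sketch of the interface in Python rather than the fictional language, with lists playing the role of $M[X]$; the function names mirror the interface above.</p>

```python
def unit():
    # The empty sequence is the unit element 1.
    return []

def emb(x):
    # Embed a generator x as a one-element sequence.
    return [x]

def dot(s, t):
    # Concatenation is the combine operation.
    return s + t

def eq(s, t):
    # Element-wise list comparison; the monoid identities hold definitionally:
    # dot(unit(), s) == s == dot(s, unit()), and dot is associative.
    return s == t
```

<p>With this representation, the three identities above are simply properties of list concatenation.</p>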
If our fictional language has algebraic data types, we can use those to build <i>terms</i> which represent the algebraic theory of monoids: $$ \begin{array}{ll} \mathsf{data}~ \mathsf{Mon}[X] \\ \quad \mathsf{Unit}(): \mathsf{Mon}[X]\\ \quad \mathsf{Emb}(x: X): \mathsf{Mon}[X]\\ \quad \mathsf{Dot}(s: \mathsf{Mon}[X], t: \mathsf{Mon}[X]): \mathsf{Mon}[X]\\ \quad \mathsf{eqMon}(s: \mathsf{Mon}[X], t: \mathsf{Mon}[X]): \mathsf{bool} \end{array}$$ </p> <p> It is now really obvious what these constructors stand for, but we note that while the equality $\mathsf{eqMon}$ returns true whenever two terms are equal when considered as elements of the free monoid, there is at least one more equality relation on terms that is different ("structural equality" or "syntactic equality", which will consider $\mathsf{dot}(\mathsf{dot}(s, t), u)$ as distinct from $\mathsf{dot}(s, \mathsf{dot}(t, u))$). </p> <p> Mathematically speaking, we want to consider the quotient set $\mathsf{Mon}[X] / \mathsf{eqMon}$, and work with equivalence classes. </p> <h1>Formulas = Functions to Truth Values</h1> <p> After all this talk about equality in algebraic theories, we want to turn to equality in the context of logic, specifically higher-order logic. </p> <p> In classical logic, a proposition (logical statement) stands for a truth value, and there are exactly two of them (the two-value type "bool"). Boole was the first to associate propositional connectives with algebraic operations. In this perspective, the truth values are part of the language, even though this was later abandoned in Tarski-style semantics. </p> <p> Programmers are quite used to having functions return $\mathsf{bool}$. In terms of mathematical logic, our custom-made equality "relation" $\mathsf{eqMon}$ above is in fact a function to the set of truth values. Could we not do away with relations and talk in terms of functions?
</p> <p> Yes, we can; using truth values and functions in a formal language of propositions is the essential summary of classical higher order logic (or Church's simple theory of types). There is a more compelling example: if you consider "set builder notation" $$ \{ x \in A \ |\ \phi(x) \} $$ then this describes a subset of $A$. We can choose to describe $\phi$ as defining a (unary) relation, but viewing it as a function to $\mathsf{bool}$ is no doubt more natural in a context like <code>SELECT x FROM A WHERE </code> $\phi$ where "relations" are given by their extensions (as tables) and computation in the form of expressions is available. </p> <p> The point of (classical) higher-order logic is that using the $\lambda$ calculus and truth values, we have everything we need to define the meaning of all logical connectives, and the types prevent logical paradoxes of "naively-untyped set theory". Since the truth values are part of the language, it is possible to define the meaning purely through definitions that involve equality. <p>More on this and classical higher order logic in that last post on <a href="https://blog.burakemir.ch/2020/04/higher-order-logic-and-equality.html" >Higher Order Logic and Equality</a > and the next one I'm going to write in a second, where I talk about the intuitionistic version. </p> <p> I will just end with a note that there is no distinction between "predicate", "relation" and "set" in higher-order logic, and this is a good thing. An expression of type $\iota \rightarrow \omicron$ (where $\iota$ are the individuals and $\omicron$ the truth values) defines a (sub)set of individuals, a predicate on the set of individuals, a unary relation, all at the same time. $$\mathsf{even} := \lambda x:\iota. \Sigma \lambda y:\iota. \mathsf{eq}(x, \mathsf{times}(2, y))$$ $$\mathsf{greater} := \lambda x:\iota.\lambda y:\iota. \Sigma \lambda z:\iota.
\mathsf{eq}(x, \mathsf{plus}(y,z))$$ $$\mathsf{lowerbound} := \lambda x:\iota.\lambda y:\iota \rightarrow \omicron. \Pi \lambda z:\iota. \mathsf{implies}(y(z), \mathsf{greater}(z, x))$$ <p> The confusing bit for programmers may be that this thing is called simple type theory when in fact there is only one "sort" of individuals. This makes more sense when considering that it is no problem to come up with multiple "sorts" of individuals, but that alone wouldn't give sets ($\iota \rightarrow \omicron$), sets of sets ($(\iota \rightarrow \omicron) \rightarrow \omicron$), sets of sets of sets, and so on, of individuals. It's this hierarchy of sets/predicates/relations that earns Church's higher order logic the name "simple type theory": only terms with the right types are considered well-formed and meaningful.Burak Emirhttp://www.blogger.com/profile/01651698653168460072noreply@blogger.com0tag:blogger.com,1999:blog-28320851.post-89384527990166060172020-07-19T14:43:00.003-07:002020-07-19T15:19:16.756-07:00Modules, components and their limitsI read <a href="http://www.sheshbabu.com/posts/rust-module-system/">Clear explanation of Rust's module system</a>. The TL;DR: the module system in Rust is a way to (somewhat) break/hide the link between what things are defined and what source files the things are defined in. A Rust module provides control over visibility across source files and makes explicit how references reach across "modules". <p>The post starts with a request to ignore what "module" means in other languages, and this plunged me into thoughts on all the things that are called modules. It is somewhat sad but to be expected that a basic word like "module" (or "package") could never have a standard meaning, despite all these programming languages supposedly being built while we have all this PL theory. <p>The more academic vocabulary has "module" as a language concept that comes with scope and visibility, a way to bundle types and named values.
It's somewhat independent of installation and deployment (the only touch point being <a href="https://courses.cs.washington.edu/courses/cse341/04wi/lectures/09-ml-modules.html">support for separate compilation</a>), but - to my knowledge - there is no talk of an "installed-at-a-version" module. So the Rust module concept is pretty consistent with that. Arranging a set of modules that fit together is the programmer's problem, not the researcher's. <p>Alas, there is this thing that happens at the level of Rust crates/npm packages, and it is really a different problem. Nevertheless, those things are also called modules in some languages (C++20 modules, Java modules, Go modules). <p>In Java, source files in the same directory make a Java <i>package</i> and they come with a way to control visibility. The association between files and packages is rigid. More recently, <a href="http://tutorials.jenkov.com/java/modules.html">Java modules</a> were added to express dependencies between modules, visibility of packages - something like "packages of packages", but really more on the level of Rust crates. <p>I am reminded of "components" (if anybody remembers component-oriented programming) because deployment is part of this game - the notion of dependencies only makes sense in the context of some installation. But then components used to be about runtime services (in addition to code), object brokers and such, and here we would actually like to talk about less dynamic things, just code and functionality in the form of source and/or libraries. Still, components are things that get assembled and exist at versions, so I'd say it's an appropriate, if old-fashioned name. <p>Now, such a (static) component would abstractly describe what npm packages or Rust crates achieve, and also a form of Debian development packages (sigh), where you install a C library in binary form together with its header files.
That gives us even more of a twist now, which is that <a href="https://docs.microsoft.com/en-us/cpp/cpp/modules-cpp?view=vs-2019">C++ 20 modules</a> are really modules in the sense of achieving grouping for visibility and scope (check) but at the same time that whole notion has a huge impact on C++ since it should actually make it easier to deploy C++ components. A relevant quote from the C++20 link above: <blockquote>After a module is compiled once, the results are stored in a binary file that describes all the exported types, functions and templates. That file can be processed much faster than a header file, and can be reused by the compiler every place where the module is imported in a project.</blockquote> <p>So while a C++20 module is a language concept, it is foreseeable that one day all self-respecting newly developed libraries will be deployed as (<i>installed at a version</i>) modules. Oh well. There was really no reason to hope that a standard meaning for "modules" and the concept "above" could be justified in any way, across languages. At least, there remains an insight that it really makes sense to deal with the two levels separately. <p>I sure wish we could all call the installed-at-a-version thing "component", and the language concept "modules". Just two paragraphs on how that might sound:</p> <p>Every language needs a module system: a way to deal with grouping and interfaces (in terms of making visible - "exporting" only the interface and keeping the rest internal). Including files was always bad and indicates the lack of a module system. It's when one needs to deal with independently developed, deployed and released units that there arises the problem of dependencies, evolution and versions, and these things are the components. Components are <i>related</i> to modules in that they need to contain at least one of them for talking about the things that are provided and brought in scope.
<p>If there is a dependency between versions of components and there are backwards incompatible (!) changes in the depended-upon component (or a <a href="https://www.theregister.com/2016/03/23/npm_left_pad_chaos/">left-pad situation</a>), then this is something that needs to be dealt with through changes to the depending component. That isn't something that module systems (or any other language design) can help much with. Burak Emirhttp://www.blogger.com/profile/01651698653168460072noreply@blogger.com0tag:blogger.com,1999:blog-28320851.post-55211794480192506202020-07-18T10:07:00.002-07:002020-07-18T10:07:52.931-07:00Relating partial evaluation, multi-stage programming and macros<p>Here are some quick facts about two metaprogramming techniques. $$ \begin{array}{l} \mathsf{letrec}\ \mathtt{pow}\ =\ \lambda n : \mathsf{Int}. \lambda x : \mathsf{Int}. \\ \quad \mathsf{if}\ (\mathsf{eqInt}\ n\ 0)\ 1\ (x\ * (\mathtt{pow}\ (n - 1)\ x)) \end{array} $$ <p>What can we do with an expression $\mathtt{pow}\ 3$? <h2>Partial evaluation</h2> <p>The "partial evaluation" answer is to inline the definition, and evaluate what can be evaluated, yielding a <i>residual program</i>. $$ \begin{array}{l} \lambda x : \mathsf{Int}. \mathsf{if}\ (\mathsf{eqInt}\ 3\ 0)\ 1\ (x * (\mathtt{pow}\ (3 - 1)\ x))\\ \lambda x : \mathsf{Int}. x * (\mathtt{pow}\ 2\ x)\\ \ldots\\ \lambda x : \mathsf{Int}. x * (x * (x * 1)) \end{array} $$ <h2>Staging</h2> <p>The multi-stage programming answer (or <i>staging</i>) is to rethink $\mathtt{pow}$ as a transformation of code. $$ \newcommand{\qquote}[1]{\color{blue}{[|}#1\color{blue}{|]}} \begin{array}{l} \mathsf{letrec}\ \mathtt{pow}\ =\ \lambda n : \mathsf{Int}. \lambda x : \mathsf{\color{blue}{Code[Int]}}. \\ \quad \mathsf{if}\ (\mathsf{eqInt}\ n\ 0)\ \color{blue}{[|}1\color{blue}{|]}\ \color{blue}{[|}$x * $(\mathtt{pow}\ (n - 1)\ x)\color{blue}{|]} \\ \mathsf{let}\ \mathtt{mkpow}\ =\ \lambda n : \mathsf{Int}. \qquote{\lambda y : \mathsf{Int}.
$(\mathtt{pow}\ n\ \qquote{y})}\\ \end{array} $$ <p>The notation was (probably) not designed to scare you: <ul> <li>$\qquote{\ldots}$ is quasi-quotation. What is between these brackets is <i>code</i>. <li>$\$(\ldots)$ is used within a quasi-quoted expression to <i>splice</i> code. </ul> <p>Now $\mathtt{mkpow}\ 3$ is going to give us the code of the desired power function: $$ \begin{array}{l} \qquote{\lambda y : \mathsf{Int}. $(\mathtt{pow}\ 3\ \qquote{y})}\\ \qquote{\lambda y : \mathsf{Int}. $(\mathsf{if}\ (\mathsf{eqInt}\ 3\ 0)\ \qquote{1}\ \qquote{$\qquote{y} * $(\mathtt{pow}\ (3 - 1)\ \qquote{y})})}\\ \qquote{\lambda y : \mathsf{Int}. $\qquote{$\qquote{y} * $(\mathtt{pow}\ 2\ \qquote{y})}}\\ \qquote{\lambda y : \mathsf{Int}. $\qquote{y} * $(\mathtt{pow}\ 2\ \qquote{y})}\\ \qquote{\lambda y : \mathsf{Int}. y * $(\mathtt{pow}\ 2\ \qquote{y})}\\ \qquote{\ldots}\\ \qquote{\lambda y : \mathsf{Int}. y * (y * (y * 1))} \end{array} $$ <p>There can be nested levels of $\color{blue}{\mathsf{Code}}$ and this explains the term multi-stage programming. For objects of this type to be really useful, there should be a method $\mathsf{run}: \color{blue}{\mathsf{Code}[T]} \rightarrow T$.</p> <p><i>Macros</i> are closely related to the above: a macro gives the programmer a way to give a definition that is <i>elaborated</i> at compile time. This is similar to what is going on in splicing. The special case is that it is the compiler that calls $\mathsf{run}$; in other words, the context in which a macro is elaborated is the "compile-time stage".
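<p>To make this concrete without a staged language at hand, here is a crude Python sketch that uses source strings as a stand-in for $\color{blue}{\mathsf{Code[Int]}}$: f-string interpolation plays the role of splicing, and <code>eval</code> plays the role of $\mathsf{run}$. Unlike real quasi-quotation, strings give no guarantee that the generated code is well-typed or even well-formed.</p>

```python
def pow_code(n, x):
    """pow : Int -> Code[Int] -> Code[Int], with Code[Int] as source strings."""
    if n == 0:
        return "1"                          # [| 1 |]
    return f"({x} * {pow_code(n - 1, x)})"  # [| $x * $(pow (n-1) x) |]

def mkpow(n):
    # mkpow n = [| \y. $(pow n [| y |]) |]
    return f"lambda y: {pow_code(n, 'y')}"

def run(code):
    # run : Code[T] -> T; for macros, the compiler is the one calling this.
    return eval(code)

cube_src = mkpow(3)  # "lambda y: (y * (y * (y * 1)))", the residual program
cube = run(cube_src)
```

<p>The string produced by <code>mkpow(3)</code> is exactly the residual program that partial evaluation produced above; <code>run</code> turns it into an actual function.</p>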
The design of <a href="https://dotty.epfl.ch/docs/reference/metaprogramming/toc.html">Scala's metaprogramming facilities</a> is consistent with all this, and I picked this up from <a href="http://biboudis.github.io/papers/pcp-gpce18.pdf">"A practical unification of multi-stage programming and macros"</a> by Stucki, Biboudis, Odersky.</p> <p>That's all; this is shallow, as all my posts are, and there would be more to say; "Modal analysis of staged computation" by Davies and Pfenning would be a good place to continue for the curious and/or impatient. I'd like to connect these views on staging and macros and modal logic, but that's for another day.Burak Emirhttp://www.blogger.com/profile/01651698653168460072noreply@blogger.com0tag:blogger.com,1999:blog-28320851.post-32584432221490413662020-07-04T01:53:00.003-07:002020-07-04T02:16:52.847-07:00Language, Logic and Modeling <blockquote> Language is the parrot of thought, and a hard to teach one, nothing more. („Sprache ist der Papagei des Gedankens, und ein schwer gelehriger, nichts weiter.“) ― Friedrich Hebbel </blockquote> <p> Here's a short note about a perspective on logic for informatics, and decidedly not mathematics. </p> <p> Formal logic has its roots in formalizing mathematics. Expressing the steps of reasoning in a formal system would assure us that there are no "gaps". I wrote a brief account on <a href="https://blog.burakemir.ch/2020/05/intuitionistic-propositional-logic-and.html#begin">the beginnings of formal logic</a> elsewhere. </p> <p> The 20th century brought with it the advent of machines for information processing. This has gone way beyond mathematics already. In informatics (a better word for "computer science") one deals with information processing, and its practical kin, software engineering, deals with systems of all kinds where information gets in contact with reality.
</p> <p> There is a view that holds that we can only understand reality through models (see <a href="https://en.wikipedia.org/wiki/Constructivist_epistemology">constructivist epistemology</a>). Now, whether one subscribes to this or not, it is obvious that informatics relies on models of reality; information itself is always part of some model of reality. If you worked with customers or product managers, you know it is a shaky and always incomplete series of models of reality, always adapting to new circumstances. </p> <p> All I am saying is: formal logic can and should help us as a tool to formulate these models precisely. Like in mathematics, we do not want to have "gaps" in our reasoning. This is a somewhat ambitious direction, as software systems are not always built according to specification, and specifications are usually prose. And it is not only about modeling the domain. </p><p> "Functional programming" has gathered a large following in recent years, and I think this is less because "software should be mathematics" and more because programmers benefit from constructing their code according to models that leave no "gaps". The tool at hand, a (statically typed) functional programming language, forces one to think thoroughly about data and the operations that are going to be carried out. </p> <p> One does not need to resort to philosophy for challenging this. Large software systems are distributed over many machines, and give rise to challenges that require methods different from those of mathematics (and different from programs that run on a single machine). While <i>functional requirements</i> lead us to modeling domain concepts, designing a system that meets those is constrained by <i>non-functional requirements</i>. We need to consider performance and machine capacity, and evaluate which underlying technologies to build on, what guarantees they provide.
It is here that engineering can be meaningfully distinguished from programming, and work is influenced less by theorems, more by best practices and experience. </p> <p> Engineering and domain modeling are full of abstract models (the favorite tool being: diagrams of boxes with their meaning provided by handwaving). Formal logic and actual, manifest models where people could experiment are not encountered very often, yet expressing these models, reasoning with them and testing/verifying them would be critical since mistakes are costly. Moreover, a large number of people have to understand and act in accordance with these models. Often, the only place where this becomes obvious is the APIs between subsystems, a point where much modeling has already happened and cannot be changed. </p><p> I will go further and claim that Conway's Law is a consequence of each team forming their own "unit of modeling". It would be a lot better if we learned to communicate explicitly and precisely what our models are, explain our needs to model things differently and how we can translate between our different models instead of insisting on some "right" way of seeing something. I place my hopes in logic as a language and tool to do just that. Mathematics is a discipline where the practice of modeling a mathematical "universe" has advanced very far, and it is no wonder that people seeking to use precise methods turn to it. It is a mistake to interpret this field of application of formal logic as some sort of mathematical monopoly. Language is a tool to communicate, a logic is a style of language. </p>Burak Emirhttp://www.blogger.com/profile/01651698653168460072noreply@blogger.com0tag:blogger.com,1999:blog-28320851.post-72773516488673801312020-07-02T01:19:00.004-07:002020-07-02T01:27:58.016-07:00I don't know how to Semantics<p>I missed out on my daily writing practice of bad, unfinished, unedited thoughts. So why not combine it with idle thoughts on logic.
The title alludes to the friendly LOGO <a href="https://el.media.mit.edu/logo-foundation/what_is_logo/logo_programming.html"> error message</a>, and this exchange on Twitter: <blockquote class="twitter-tweet"><p lang="en" dir="ltr">Semantics can never be given in terms of axioms. Theories are given in terms of axioms. Semantics are given in terms of interpretations and models.</p>— Andrej Bauer (@andrejbauer) <a href="https://twitter.com/andrejbauer/status/1278368000127700994?ref_src=twsrc%5Etfw">July 1, 2020</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script> <p>Andrej is not wrong, but axiomatic semantics is a thing in PL. So I'm now collecting the two different contexts for the word and idea of "semantics": one in mathematical logic, one in programming languages. I am generally trying to embrace the multitude of meanings in different contexts, but given the strong connections between PL and logic, it is worth pondering what is going on here. <h1>Linguistic meaning: the writing on the wall</h1> <p>The word "semantics" comes from σῆμα "sign", "marking" and refers to the study of meaning (somewhat associated with linguistics, the study of language). Now, meaning is a very broad thing to study, and how we create meaning and interpret signs proper (beyond language) leads us to <i>semiotics</i>. I promise I am not going to spiral into ever higher levels of abstraction, and will get back to the more mundane divergences of academic disciplines; still I want to share a story to show the depth of the question "how we make meaning of something" and what meaning really is. <p>Here is an example of linguistic and semiotic meaning. I believe I had heard and understood in English the idiom "the writing is on the wall" long before I heard the <a href="https://en.wikipedia.org/wiki/Belshazzar%27s_feast">biblical story</a>. It is hard to tell, since I do not remember.
Maybe I looked up the German word "Menetekel" – which is the same but doesn't sound as cool and portentous – and then forgot about it. Maybe I learned and chose to forget the biblical reference. English is full of references to stories that become idioms, detached from their origins (I also learned very recently that "drinking the Kool-Aid" originates from the Jonestown massacre), yet somehow they convey meaning. <p>Conveying meaning in written language can only happen through more written language; however, language is definitely more than what is written. I claim it's possible that even if you never heard this story from the Bible and have no access to it, you can nevertheless learn the full meaning of the saying "the writing is on the wall" when you witness it being used in some situation that discusses the terrible ending to come. Whether that is from the textual, narrative context, or rather from the faces, gestures of the participants or from your understanding of the reality of the thing it is said in reference to, that's hard to tell. <blockquote>The Kamus writes how many meanings each word has; then, I wondered and tried to write down a Kamus showing to how many words a meaning corresponds. (Bediuzzaman Said Nursi) </blockquote> <blockquote> Wisdom is a tree which sprouts in the heart and fruits on the tongue. (Ali ibn Abi Talib) </blockquote> <p>Now, back to our divergent use of "semantics" in mathematical logic and programming languages. <h1>Semantics in mathematical logic</h1> <p>The predominant view in mathematical logic today is Tarski's conception of semantics and truth, <a href="https://plato.stanford.edu/entries/logic-classical/#Sema">adapted to model theory</a>.
This is what is taught to undergrads, and I should like to deconstruct the assumptions that come with it: <ul> <li>Statements are either true or false, or otherwise put, there is an objective notion of truth that we try to capture with logic languages.</li> <li>The meaning of a logical language is given by describing the interpretation of the language in a model</li> <li>The way we communicate or describe a model and the interpretation are well understood.</li> </ul><p>All issues one may have are subdued with the last point. You cannot really question why models are given in "informal set theory" because the whole meaning exercise <i>requires</i> some common ground. If one ever wonders how to give meaning to those things that are supposed to provide the well-understood meaning, one finds that formal set theory is relatively disconnected from what one is trying to do (and turns to type theory). <p>I find it very striking that at the time Tarski formulated his original criterion for defining truth, the lingua franca was higher-order logic (which is equivalent to simple type theory), not set theory. I do feel something was lost there. In classical higher-order logic, we are concerned with mappings to truth values, the point being the mappings (not the "sets"). So rather than being the "informal set", a grouping of objects by some property, or a formal set described by an obscure string of formal set theory, a "higher-order logic" set is defined by its mapping. A "woman" is defined by a mapping <pre>woman: Individuals -> {true, false}</pre> <p>... and you can argue when it returns true. I find this closer to discussions on meaning in the linguistic sense. <h1>Semantics in programming languages</h1> <p>In programming languages, "semantics" refers to the "meaning" of a programming language (this is actually a surprising and confusing statement). Now, "meaning" is something a lot narrower than linguistic meaning.
<ul> <li>Something does not have to be universally and objectively true to give "meaning".</li> <li>The conventions we can rely on when giving meaning are entirely different.</li> <li>The way we communicate or describe a model and the interpretation is well enough understood.</li> </ul> <p>"Meaning" can be something like a specification that clarifies what statements and expressions of the language do. In practice, this is all informal. In research, there are three "styles": <ul> <li>Denotational semantics. We choose to embrace the conventions of mathematics as our medium for conveying meaning as objective truth.</li> <li>Operational semantics. The meaning is in the behavior, the execution of the programs. See what this interpreter does? That is the meaning.</li> <li>Axiomatic semantics. The meaning of a program is the properties you can state about it.</li> </ul> <p>This distinction is entirely standard and helpful; I learnt it as an undergrad, and it appears e.g. in Benjamin Pierce's "Types and Programming Languages". It is also largely ignored in industry, where a natural-language specification relying on conventions is all you get for "meaning" (and Stack Overflow may cover the gaps). <h1>Finale</h1> <p>Programming actually has its roots in formal logic. It is not a coincidence that the act of setting up formal languages and the burden of having to define meaning is shared by people who do mathematical logic (ok, I should say formal logic, to not exclude the philosophers) and people who do programming languages, and that informaticians take an interest in proof assistants and machine-checked proof. <p>Observe that all styles leave open a path to full formalization, but for axiomatic semantics it is somewhat odd, since we'd have to formalize logic in logic.
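<p>As a toy illustration of the first two styles (a minimal sketch of my own, with invented names, not from any textbook): for a tiny language of numerals and addition, a denotational semantics maps each expression to a mathematical object, while a small-step operational semantics rewrites expressions until a value remains.

```python
# Toy expression language: ("num", n) or ("add", e1, e2). Encoding invented here.

def denote(e):
    """Denotational style: map each expression to a mathematical object (an int)."""
    tag = e[0]
    if tag == "num":
        return e[1]
    if tag == "add":
        return denote(e[1]) + denote(e[2])
    raise ValueError("unknown expression")

def step(e):
    """Small-step operational style: rewrite the expression by one step."""
    tag = e[0]
    if tag == "add":
        _, e1, e2 = e
        if e1[0] != "num":
            return ("add", step(e1), e2)
        if e2[0] != "num":
            return ("add", e1, step(e2))
        return ("num", e1[1] + e2[1])
    raise ValueError("no step possible")

def run(e):
    """Iterate steps until a value remains; 'the meaning is in the execution'."""
    while e[0] != "num":
        e = step(e)
    return e[1]

expr = ("add", ("num", 1), ("add", ("num", 2), ("num", 3)))
assert denote(expr) == run(expr) == 6
```

For this trivial language the two styles agree on every expression; the interesting (and hard) part of semantics is establishing such agreement for real languages.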
The boundaries are also not as clear-cut, since one can also describe an interpreter by some mathematical structure (labeled transition systems), and there are also formal methods, verification and the "theory of a program", which needs some semantics to build on (not necessarily "the" semantics). The taxonomy is still very useful, though, since it says something about the intention. <p>My personal hobby is to study stacking systems of logic and reasoning, "compositionally". Category theory is useful here, precisely because it permits one to consistently ignore what the foundation is, and to describe everything in terms of mappings (arrows), just like higher-order logic. I shall leave this topic for another post. Burak Emirhttp://www.blogger.com/profile/01651698653168460072noreply@blogger.com0tag:blogger.com,1999:blog-28320851.post-20166079838269374362020-06-29T13:07:00.001-07:002020-06-29T13:07:29.932-07:00Go Karma Schizophrenia<p> Today, I tried to busy myself with doing some programming exercise in go. Not the board game, the language. Here are some idle thoughts on programming languages. </p> <p> I had forgotten how good I can be at procrastination. I drank about a dozen cups of tea (sore throat), checked Twitter and mail... it was terrible. </p> <p> If you studied programming languages, there are many things that seem off about golang. I could even understand if the lack of generics had been justified as: "we value type inference and structural subtyping higher than having generics." But no, they are on the record as "we never needed them". Ok... Opinions on programming language design are a dime a dozen. </p> <p> I do not aspire to exercise my golang skills because it is a "great programming language", or because of its appeal to simplicity. Following a change of role at work, I simply expect to be dealing with a lot of go code in my job. That, and "modern C++". It's as simple as that. And sure enough, you can get used to anything.
</p> <p> When I was a grad student working on a certain programming language, I remember how convinced we were that we'd be changing the world. And indeed we have, but then a whole bunch of others have, too. Golang did not exist back then, and neither did "modern C++". I did have some experience in C++ programming, from peeking at programming courses for math students during uni (absolutely nothing related to actual C++ in my informatics curriculum, though there was a lecture on object-oriented programming) and my two internships. I also learned some C from a friend, one seminar, and then many hours of toying around. This changed with my job of course, but the point is I perceived this dichotomy between programming as taught and researched in academia and programming as done in industry or applied research contexts "with an industrial flavor". As grad students working on a language, we were convinced we had the best language, and determined to make it bestest, but I had seen "the other side". </p> <p> I continued to dabble in C++ in those times, because I was sure I'd need it someday; it's the reality that cannot be ignored when studying programming languages. I described this as "professional schizophrenia": you split yourself into the part of you that accepts the "industry" programming languages, and the part that supports progress based on research, which mostly gets ignored. I was then quite surprised when C# 2.0 got generics and Java 1.5 quickly followed suit. It looks like pattern matching is making it into the mainstream, too. </p> <p> Another way to view this discrepancy is pragmatism: you use what's there; things are the way they are due to path dependence, economic pressure or some other reason that was of concern to somebody. Languages are tools, no need for judgment, just move towards the objective.
This is my modus operandi, but it feels good to know that a domain-specific language that is using elements of an advanced type system is sometimes the thing that moves you towards that objective. Still, I am surprised how much one can be stuck in "my programming language is my identity", at myself in hindsight but also at all the fan discussions. </p> <p> Today, after procrastination, I think there might also be some karma. While I had not been too vocal, I'm sure in a weak moment, I must have said something like "golang is not a good language" aloud. I remember looking at source code that I thought looked like line noise (the inline & turned me off); surely I must have expressed my disaffection a few times. After a whole day of pain-avoidance behavior – the other name for procrastination – I realize now that maybe this is what has been haunting me. I have been paying for my remarks, dearly. Wake up, Burak, it's time to do your exercise and show how much of a pragmatist you really are. </p> <p> On the bright side, after all this suffering, I am hoping to be born again as a happy gopher tomorrow. </p>Burak Emirhttp://www.blogger.com/profile/01651698653168460072noreply@blogger.com0tag:blogger.com,1999:blog-28320851.post-56770196493795953402020-06-28T05:09:00.003-07:002020-06-28T05:17:24.237-07:00Nothing is real<div style="width: 80%"><blockquote>We are the children of concrete and steel<br />This is the place where the truth is concealed<br />This is the time when the lie is revealed<br /> Everything is possible, but nothing is real</blockquote><p style="text-align:right"><a href="https://www.youtube.com/watch?v=1HbF3EAt3ck">Living Colour - Type</a></p></div> <p>Ryuichi had to get out. The kids were running around the house and screaming, their juvenile energy levels blending with the boredom of a cloudy Sunday morning to form an explosive mixture. He checked the viral pollution levels and left unceremoniously.
It wasn't ok to leave Lingzhao alone with the kids, but still he had to get out, or the noise and shouting would get to him and erase all possibility of thought. He needed space to think, and silence. <p>The streets had not yet filled with people, only a few families strolled, their shining beautiful Black faces gleaming in the African sunlight. The Chromatic Cure had worked its wonders, he thought. After the first "baby pandemic" of 2020 and the 2nd civil war for racial justice came the Dinosaur diseases, caused by multiple strains of airborne virus out of the remnants of Siberian permafrost. It affected people with light skin color disproportionately, which had eventually led to a treatment. The skin's pigments were modified to transform UV light into virus-defeating proteins, but it needed Black pigments to work, a lot of them. Ryuichi vaguely remembered this from his medical biochemistry class in kindergarten. "And that is why everybody is Black now", the teacher had said. The Unified Society logo on her uniform caught his eye; it was one of his earliest memories. <p>Weekend or not, those remaining pesky white supremacists were still trouble, Ryuichi thought. Their enclaves were on life support from the Unified Society. Large compounds, former prison complexes that had been sealed airtight and determined to hold out, fueled by conspiracy theories and delusion. Endless debates had ensued whether the move to reactivate the old underground supply channels to keep them alive was a justified use of scarce resources. Compassion won out, and the prison enclaves became the only refuge for white people who preferred living with other fellow white male supremacists. <p>Ryuichi worked on the team that intercepted electromagnetic signals and gathered intelligence. Two individuals, bugmold and GDJ, had emerged as leaders, one based on ideology, while the other seemingly controlled the software that kept the modified prison complex working.
Very little was known about the fate of women in the prison complexes; there had been many who chose to give up freedom to retain their skin color. But violent fights had erased all but a few, who willingly accepted bugmold's tales of superior intelligence and the day of revenge, and GDJ's programming school of thought, in order to keep the automated manufacturing going and launch InfoSec and fake news attacks on the Unified Society. The brutal fights in the White Supremacy enclaves had become a pattern that repeated itself once a decade. They even had a name for this: Natural Selection. <p>Ryuichi found a bench and took a break from his walk. It would soon be time. <p>Compassion was also the argument that led to the Plan being accepted. The intelligence gathered by Ryuichi's group revealed more and more signs that the crooked ideology and its implementation through software protocols had made the prisons places of crimes against humanity and intolerable suffering, and human rights groups revolted. The cure had been perfected and encapsulated; no complicated week-long procedure was necessary anymore, the biochemists of the Unified Society had created a new airborne virus that would set off the process of transforming the skin color. Everybody has a right to keep their skin color, the chairman of the assembly said, as long as it is Black. Even the most ardent supporters of "tolerance" could not bear the tales of natural selection. <p>Ryuichi checked his watch: it was noon. The sealed gray metal boxes with today's supplies to the prisons, delivered synchronously on a global automated schedule, just opened with a click. He smiled and thought, now that White is history, the team will be asking for a new project tomorrow. But it was the weekend, after all.
Time to get back and see what the kids are up to.Burak Emirhttp://www.blogger.com/profile/01651698653168460072noreply@blogger.com0tag:blogger.com,1999:blog-28320851.post-51723112765640278682020-06-27T07:18:00.002-07:002020-06-27T07:18:46.642-07:00Gotta not-do what you gotta not-do<p>Today's writing is about <a href="https://en.wikipedia.org/wiki/Wu_wei">wu wei</a> - "effortless action". <p>Wu wei comes from Taoism. I don't know very much about Taoism. I do not have a plan to become a thought leader on Taoism. My only personal connection to Chinese philosophical tradition is that I learned Tae Kwon Do as a teenager and got curious about the "way" in my teachers' description that the roots of these are not "martial arts" but a "philosophy (or way) of movement". <p>I was intrigued by this idea that "attaining perfection (of movement) through practice" was not for becoming a perfect fighter, but for attaining wisdom. I went on to read the Tao Te Ching and the I Ching, and while I did not understand much, I became sensitive to these topics. [A friend at work, a big fan of martial arts, laughed at me for taking this philosophy stuff seriously - but then, "If the Tao would not be laughed at this way, it would not be the Tao!"] <p>The point is, you don't have to be a philosopher to appreciate that the idea of effortless action is one that is found in many forms and cultures. It's a pattern easy to observe, if you have the eye for it. <p>Take "<a href="https://en.wikipedia.org/wiki/Flow_(psychology)#Professions_and_work">flow</a>" or "being in the zone". There are three conditions for flow to occur: <ul><li>Goals are clear <li>Feedback is immediate <li>A balance exists between opportunity and capacity </ul> <p>This resonates with the programmer working on a software artifact, with the amateur musician playing a piece on the piano, or when doing sports (I am going to stop here). It also resonates indirectly: think of a manager who wants to create a productive environment for their team.
<p>The analytical, self-conscious and doubtful mind takes a back seat and you "become one" with the thing you are doing. There is a view of Taoism where this happens not only at the individual level, but also at the level of society; this is what is described in the Tao Te Ching and I Ching (and discussed in the Wikipedia article on wu wei). <p>It should be intuitive (if somewhat illogical in the realm of words) that "effortless action" requires putting in some effort. The programmer had to learn many things in order to reach a flow state, hacking away at code. The musician had to practice countless hours in order to be able to play the piece. That effort is not felt in the moment of flow. You are building up capacity. <p>Building up capacity, if done right, is subject to compounding effects. You want not only to learn, but also to watch your rate of learning. Is there a better method? Am I still working towards the same goal? <p>Now, when we think of "goals" we are slightly at odds with "effortless action" and the Taoist ideal, since setting a goal can already be "forced." "Effortless action" is not forced, and also not goal-oriented. <p>Effortless action and Taoism in general are not concerned with values or morality. Taoism features the theme of man's inability to force changes upon society. This is a bit revolting at first sight, since today's society is full of injustice. Black lives matter! <p>Things start making a bit more sense when one thinks of the attempts to shape a society or the world by ideology: this is bound to fail or become unsustainable (let's count capitalism as the ideology of $$$ here). You can have the best law; it is not by itself going to guarantee justice. Or, when you think of some software to help with some process, it is helpful to look at those things that do not need to be specified - because it is hopeless to specify them and you need to rely on "common sense" or some norm or culture.
Also related is: "There is no way to peace, peace is the way." <p>The effort on the society level could be framed as "building the capacity and opportunity for justice". So an abstract, nondescript goal, direction or objective like "justice" or "I want to be good at X" is ok, and not a goal in the sense of forcing some direction; we call it a goal since we live in times where we have an extraordinary freedom in choosing what we want to do with our lives. Choices do not preclude wu wei. I'm inclined to think that wu wei is related to authenticity and focus: not getting distracted by futile matters, not doing something because you feel you have to, not doing something for purely "political" (tactical) reasons, but focusing on the essence. Another quote that comes to mind here: "Never mistake activity for progress." <p>There is a lot more to find out about this. I am not an expert on Taoism and I still do not intend to become one; but maybe there's a reminder here for looking at one's own patterns of doing and not doing.Burak Emirhttp://www.blogger.com/profile/01651698653168460072noreply@blogger.com0tag:blogger.com,1999:blog-28320851.post-60983330240856654062020-06-26T03:23:00.002-07:002020-06-26T04:57:10.483-07:00Life has a limit but knowledge has none<blockquote>“Your life has a limit, but knowledge has none. If you use what is limited to pursue what has no limit, you will be in danger.” Zhuangzi</blockquote> <p>If DRMacIver in <a href="https://notebook.drmaciver.com/posts/2020-06-25-13:28.html">Subsetting life</a> can draw an analogy between the limited nature of consciousness / the human condition and how C++ programmers effectively know and use only a fraction of the C++ standard — a little subset of the possibilities — then I can do the same with Zhuangzi's warning and ... something.
Thank you, David.</p> <p>I have written on intuitionist mathematics and logic <a href="https://blog.burakemir.ch/2020/05/intuitionistic-propositional-logic-and.html">before</a>, so today it's not going to be about effective computations being terminating ones, or potential infinity vs actual infinity. No, let's talk about models.</p> <p>I found this quote in a little notebook of mine, in an entry dated 20 years ago. I had an actual habit of writing notes, long-winded streams of consciousness, full of philosophy and reflection; a note on the fleeting nature of consciousness next to a bitter account of unrequited love; a seemingly never-ending search. I threw away most of those notebooks, but for some reason I kept this one and found it the other day while clearing out the basement.</p> <p>A model is always a simplified image of something. That something is not necessarily "reality"; sometimes the thing that is simplified is just another model, or construction. When you make a model of anything, you leave out some parts (attributes) and retain others (Herbert Stachowiak calls this <i>Verkürzungsmerkmal</i>, or attribute of abstraction). Constructing models is a key activity for the informatician, and I am glad that undergrads <a href="https://files.ifi.uzh.ch/rerg/amadeus/teaching/courses/inf_II_fs10/inf_II_kapitel_02.pdf">get exposed to Stachowiak's model theory</a> (slides in German). <p>Any programmer engages in the business of creating models. Every data structure is part of a model, and data structures serve different purposes. What is right in one context is not appropriate in another. <i>Premature optimization</i> is the adaptation of code (and frequently also data) for the purpose of optimizing the performance of some operation, often losing flexibility in the process, and making other operations slower or more expensive.
"Premature" means the optimization happens too early; the implicit recommendation is to keep things general until you know which operations you need.</p> <p>Let's not forget about consciousness and life. As a human, you need to make models, too. You need to understand and interact with your fellow humans, understand the consequences of your actions. This is not easy. For goodness' sake, you need to understand yourself. What was it that made me throw those notebooks away? </p> <p>There is this theory that there is only one consciousness. Carl Gustav Jung writes about this in Typologie, as a key pitfall that makes some introverts insufferable (I felt seen), since they tend to confuse their <i>self</i> and that universal consciousness. "One love" stands for universal love and respect for all beings, no distinctions. Religion literally means connecting your self with the One. The Vedic religion talks about consciousness. You don't need to reach for religion to accept a universality of the basic conditions and limitations in which consciousness and thinking occur, though. <p>But not only life is limited: what each individual knows, and is able to do with the knowledge they have, is limited, too. Every human is absolutely unique, not only in the various social or economic conditions they are born into, but also in their continued perspective. I like this opening phrase from Jean Piaget's <a href="https://www.marxists.org/reference/subject/philosophy/works/fr/piaget.htm">genetic epistemology</a>: <blockquote>GENETIC EPISTEMOLOGY attempts to explain knowledge, and in particular scientific knowledge, on the basis of its history, its sociogenesis, and especially the psychological origins of the notions and operations upon which it is based.</blockquote> <p>Piaget set out to study the development of children, and ended up with a theory of knowledge. Is it constructed models all the way down?
Piaget classifies knowledge into three kinds: physical, social, and "relationships", which he calls logico-mathematical models. Why does he call the latter "logico-mathematical"? I think because mathematics, and to some extent logical descriptions of mathematics, are used here as synonyms for a kind of knowledge where we are completely free to choose what we are modeling and why; at the same time the models are supposed to be <i>consistent</i>. We want to be sure that "they work", that they are good for something. <p>Now, the human condition <i>isn't</i> only about the subject. Different people have different models, so it is not unusual that programmers end up spending 50% or more of their time resolving their disagreements about models. True progress is when people learn something from each other, bridge differences, teach each other and enable others to overcome their limitations. <p>I don't know how I ended up on Taoism, but these are the "ten thousand things". Consciousness is not limited, but each individual slice of it is. And there are challenges and limits to collaboration, many of them stemming from social or economic limitations or preconceptions. Every individual does feel the need to belong; this is where rationality itself reaches its limits. <p>I am just going to end here. A lot of disagreements would be easily bridged if we made more of an effort to understand each other: what models, what limitations, what perspectives make the people say what they say, do what they do.
It is not consciousness that is limited; it is its manifestation in the individual — and sometimes just the willingness to take the perspective of the other.Burak Emirhttp://www.blogger.com/profile/01651698653168460072noreply@blogger.com0tag:blogger.com,1999:blog-28320851.post-77251080672581700342020-05-30T16:58:00.001-07:002020-07-01T08:10:22.088-07:00Intuitionistic Propositional Logic and Natural Deduction<p>Here is a primer on intuitionistic propositional logic and natural deduction, along with a brief account of their historical context. The bits of historical context may explain why formal logic is somewhat entangled with the discourse on "foundations of mathematics". I frankly wanted to research and type up my notes on something else, but then started "writing backwards" and thought I'd make these the introductory notes I wish I had had before learning about these things. <p>The topics here can be seen as useful background knowledge for anyone learning about sequent calculus, proof theory, Martin-Löf type theory (aka intuitionistic type theory aka constructive type theory), the Curry-Howard-Lambek correspondence with category theory and particularly cartesian closed categories, and categorical logic; I may write about these connections at another time, but this is supposed to be an accessible overview.
We proceed according to this plan: <ul><li><b><a href="#begin">The beginnings of formal logic</a></b> - a few words on Frege, Russell <li><b><a href="#axiomatic">Axiomatic proof systems</a></b> - a few words on Hilbert systems, with modus ponens as the only inference rule <li><b><a href="#intuit">What is intuitionistic logic?</a></b> - a few words on Brouwer, Heyting, Kolmogorov <li><b><a href="#gentzen">Gentzen's natural deduction and sequent calculus</a></b> - Gentzen creates the field of structural proof theory <li><b><a href="#intnj">Intuitionistic natural deduction NJ</a></b> - a presentation of the natural deduction calculus NJ for intuitionistic propositional logic <li><b><a href="#natseq">Natural deduction in sequent calculus style</a></b> - an alternative presentation that groups assumptions into "judgments" $\Gamma \vdash A$ </ul> <p>The focus on propositional logic is for simplicity, and to build a bridge to my <a href="https://blog.burakemir.ch/2020/04/higher-order-logic-and-equality.html">last post</a> where I briefly described the Curry-Howard correspondence. One can add symbols and rules for quantifiers to obtain first-order predicate logic, and the inclined reader should consult the more detailed texts on these topics, but proof terms would no longer be simply-typed $\lambda$-calculus, and then with higher-order logic there is also a different way to talk about sets of individual objects. <h3><a name="begin">The beginnings of formal logic</a></h3> <p>For ages, mathematics involved theorems and proofs. Yet, the beginning of <i>formal</i> logic is marked by Gottlob Frege's and Bertrand Russell's works. <p>Gottlob Frege described a graphical notation that captured sound reasoning in precise rules. It was the beginning of the logicist program of demonstrating that all mathematics could be carried out in formal logic. Bertrand Russell advanced this program considerably, also inventing type theory in the process.
<p>Formal logic, or symbolic logic, refers to logical statements and reasoning being expressed in a <i>formal language</i>, with <i>symbols</i> representing logical statements and <i>rules of symbol manipulation</i> describing how to obtain new statements from existing ones, preserving validity. Frege's and Russell's deductive systems of reasoning are today presented as Hilbert systems (more on those below). <p>Georg Cantor had proposed set theory as a foundational system earlier, but a formal account of set theory had to wait until Zermelo published his axioms. People (including Cantor himself) had found paradoxes in set theory, or in what they understood to be set theory. The search for the "right" foundations continued, while alternative systems of formal logic were proposed (for example, Schönfinkel's combinatory logic, which does not use any variables). The period that followed the beginnings saw what is today called the foundational crisis of mathematics. <h3><a name="axiomatic">Axiomatic proof systems</a></h3> <p>David Hilbert and his school systematized mathematical logic, the application of formal logic to formalizing mathematics, and "metamathematics". <blockquote>Around 1920, the “Hilbert style” axiomatic approach, as it is often called, was known to everyone and dominated the logical scene [1]</blockquote> <p><a href="https://en.wikipedia.org/wiki/Hilbert_system">Hilbert systems</a> are a uniform presentation of formal deductive reasoning and can be treated as synonymous with axiomatic proof systems. There are "logical axioms" that capture the process of sound reasoning and there are "mathematical axioms" which would capture the mathematical theory we want to formalize. The only inference rules are the <i>rule of substitution</i> and <i>modus ponens</i>. A proof is a sequence of formulas, with each formula being either an axiom or derived from previous formulas by application of the inference rules. If a formula $A$ is derivable this way, this is written $\vdash A$.
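<p>To make the notion of "proof as a sequence of formulas" concrete, here is a small sketch of my own (Python, with an invented tuple encoding, not part of any standard presentation): the classic derivation of $A \rightarrow A$ by modus ponens from instances of the first two axiom schemes of the Łukasiewicz system shown below.

```python
# Formulas as tuples: ("var", name) or ("imp", a, b) for a -> b. Encoding invented here.

def imp(a, b):
    return ("imp", a, b)

A = ("var", "A")

def modus_ponens(implication, premise):
    """From a -> b and a, conclude b; fail if the rule does not apply."""
    tag, a, b = implication
    assert tag == "imp" and a == premise, "modus ponens does not apply"
    return b

# Instances of the axiom schemes (metavariables instantiated by hand):
ax1_a = imp(A, imp(imp(A, A), A))                    # scheme 1 with B := A -> A
ax2 = imp(ax1_a, imp(imp(A, imp(A, A)), imp(A, A)))  # scheme 2 with B := A -> A, C := A
ax1_b = imp(A, imp(A, A))                            # scheme 1 with B := A

# The proof is the sequence ax1_a, ax2, ax1_b, step1, step2:
step1 = modus_ponens(ax2, ax1_a)    # (A -> (A -> A)) -> (A -> A)
step2 = modus_ponens(step1, ax1_b)  # A -> A
assert step2 == imp(A, A)
```

Note that the scheme instances are written out by hand; a full Hilbert-style proof checker would also implement substitution of metavariables.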
<p>Later, John von Neumann refined axiomatic proof systems using <i>axiom schemes</i>: axiom schemes contain metavariables, all instances count as axioms, and <i>modus ponens</i> becomes the only rule. Here is such a Hilbert-style axiomatic proof system for propositional logic, due to Jan Łukasiewicz: $$ \begin{array}{l} 1.~ A \rightarrow (B \rightarrow A) \\ 2.~ (A \rightarrow (B \rightarrow C)) \rightarrow ((A \rightarrow B) \rightarrow (A \rightarrow C)) \\ 3.~ (\neg B \rightarrow \neg A) \rightarrow (A \rightarrow B) \\ \end{array} $$ <p>The rule of modus ponens lets us infer from $A \rightarrow B$ and $A$ the new formula $B$. Note that the other operators $\vee$ and $\wedge$ are considered as abbreviations; this is referred to as Ockham's razor, since it uses a minimum of primitives: $A \vee B$ is turned into $\neg A \rightarrow B$, and $A \wedge B$ into $\neg (A \rightarrow \neg B)$. <p>It is amazing that this axiomatic proof system is <i>complete</i>: all tautologies can be proven from a few axiom schemes. Many texts on formal logic still use axiomatic proof systems, presumably because these permit talking about formal logic with a minimal amount of machinery for syntax and logical axioms; appealing to the <a href="https://en.wikipedia.org/wiki/Deduction_theorem">deduction theorem</a> and admissible rules hides much of the structure of the proof, which is not considered interesting to begin with; all that matters is the fact that something can be proven. </p> <p>[1] Stanford Encyclopedia of Philosophy <a href="https://plato.stanford.edu/entries/proof-theory-development/">The Development of Proof Theory</a></p> <p>[2] Stanford Encyclopedia of Philosophy <a href="https://plato.stanford.edu/entries/logic-firstorder-emergence/">The Emergence of First-Order Logic</a></p> <h3><a name="intuit">What is intuitionistic logic?</a></h3> <p>L.E.J.
Brouwer rejected the idea that logic would be prior to mathematics and proposed <i>intuitionism</i> as a philosophy of mathematics; he understood mathematics as mental constructions that come first, before any rendering in logic. </p> <p>We summarize the intuitionist positions as follows: <ul><li>An intuitionist rejects the law of the excluded middle $A \vee \neg A$ when $A$ is a statement about infinite sets</li><li>An intuitionist rejects the idea of an "actual infinite" altogether and regards infinity as "potential infinity", approximable by repeating a process, but never completed.</li><li>An intuitionist aims to base all mathematics on effective constructions that can be verified based on intuition (such as the intuition of being able to carry out a countable number of steps)</li></ul> <p>These restrictions avoid the paradoxes of naive set theory, at the cost of rejecting large parts of established mathematical practice. <p>Brouwer's student Heyting [3] and also Kolmogorov [4] formulated axioms of intuitionistic logic as axiomatic proof systems. <p>Their efforts are what enabled further discussion (beyond philosophical disagreement) and defined intuitionistic logic [5]. Today, maybe due to the multiple available proof systems for formal logic, many sources define intuitionistic logic informally by reproducing the "Brouwer-Heyting-Kolmogorov (BHK) interpretation" [<a href="https://en.wikipedia.org/wiki/Brouwer%E2%80%93Heyting%E2%80%93Kolmogorov_interpretation">wikipedia</a>]. This is a description of what counts as an acceptable proof. We skip a discussion of these clauses because we will be giving natural deduction rules below. The key part is that the structure of the proof matters now; the structure contains the constructions that the intuitionist cares about.
<p>Intuitionistic <i>logic</i> is only considered one form of constructive logic and constructive mathematics; there are other logics that are compatible with the BHK interpretation, and multiple possible interpretations of the words "construction" and "function", and then I think most mathematicians do not bother with formal logic (while still caring about precise language and rigour). A different system of logic that is motivated by enabling constructive reasoning is Martin-Löf type theory (also called <i>intuitionistic type theory</i> or <i>constructive type theory</i>). <p>[3] Arend Heyting (1930). Die formalen Regeln der intuitionistischen Logik, 3 Teile. In: Sitzungsberichte der preußischen Akademie der Wissenschaften, phys.-math. Klasse, 1930, 42–56, 57–71, 158–169. </p> <p>[4] Thierry Coquand. <a href="https://www.researchgate.net/publication/227266974_Kolmogorov's_contribution_to_intuitionistic_logic">Kolmogorov's contribution to intuitionistic logic.</a> 10.1007/978-3-540-36351-4_2. <p>[5] Stanford Encyclopedia of Philosophy <a href="https://plato.stanford.edu/entries/intuitionistic-logic-development/">The Development of Intuitionistic Logic.</a></p> <h3><a name="gentzen">Gentzen's natural deduction and sequent calculus</a></h3> <p>Gerhard Gentzen was David Hilbert's student and sparked the entire field of structural proof theory with his seminal article and thesis that introduced both natural deduction and sequent calculus - in both intuitionistic and classical variants [6]. <p>Gentzen was not the first to invent natural deduction. Jan Łukasiewicz observed that the axiomatic systems of Frege, Russell and Hilbert do not reflect mathematical practice; mathematicians start their proofs from assumptions, applying "proof figures" to derive new results. His student Stanisław Jaśkowski came up with a calculus of natural deduction, where derivations from assumptions are drawn in boxes [7].
<p>Gentzen pointed out that proofs come with structure: a proof may contain a "detour", and such detours can always be removed, yielding a normal form. In order to demonstrate this, he reformulated natural deduction as the <i>sequent calculus</i> and established normalization through his celebrated cut-elimination theorem. In the sequent calculus, lists of formulae are manipulated (the word <i>sequent</i> was chosen by Kleene as a translation of Gentzen's "Sequenz"). Much later, Dag Prawitz proved the corresponding normalization result directly for natural deduction [8]. <p>Natural deduction formally and precisely reflects how a mathematician proves things while still admitting a "natural" understanding of what is going on - unlike axiomatic proof systems, where structure is artificial and ultimately does not matter, as everything is obtained by modus ponens and axioms. <p>[6] Gentzen, Gerhard Karl Erich (1935). "Untersuchungen über das logische Schließen. I". Mathematische Zeitschrift. 39 (2): 176–210. doi:10.1007/BF01201353. (<a href="https://gdz.sub.uni-goettingen.de/id/PPN266833020_0039?tify={%22pages%22:[180],%22view%22:%22info%22}">link</a>)</p> <p>[7] Stanisław Jaśkowski (1934) <a href="https://www.logik.ch/daten/jaskowski.pdf">On the Rules of Suppositions in Formal Logic</a> Studia Logica 1, pp. 5–32 (reprinted in: Storrs McCall (ed.), Polish Logic 1920-1939, Oxford University Press, 1967 pp. 232–258) <p>[8] Dag Prawitz (1965). 
"Natural Deduction: A Proof-Theoretical Study" (reissued 2006 Dover) <!-- The naming scheme was: </p><ul> <li>intuitionistic natural deduction NJ and classical natural deduction NK </li><li>intuitionistic sequent calculus LJ and classical sequent calculus LK </li></ul>-->\(\newcommand{\andintro}[0]{\color{blue}{(\wedge{\mathrm{\small INTRO}})}}\) \(\newcommand{\andelim}[0]{\color{blue}{(\wedge{\mathrm{\small ELIM}})}}\) \(\newcommand{\andelimleft}[0]{\color{blue}{(\wedge{\mathrm{\small ELIM/1}})}}\) \(\newcommand{\andelimright}[0]{\color{blue}{(\wedge{\mathrm{\small ELIM/2}})}}\) \(\newcommand{\orintro}[0]{\color{blue}{(\vee{\mathrm{\small INTRO}})}}\) \(\newcommand{\orelim}[0]{\color{blue}{(\vee{\mathrm{\small ELIM}})}}\) \(\newcommand{\implintro}[0]{\color{blue}{(\rightarrow{\mathrm{\small INTRO}})}}\) \(\newcommand{\implelim}[0]{\color{blue}{(\rightarrow{\mathrm{\small ELIM}})}}\) \(\newcommand{\notelim}[0]{\color{blue}{(\neg{\mathrm{\small ELIM}})}}\) \(\newcommand{\botelim}[0]{\color{blue}{(\bot{\mathrm{\small ELIM}})}}\) <h3><a name="intnj">Intuitionistic natural deduction NJ</a></h3> <p>We now look at the calculus Gentzen calls NJ, but for <i>propositional</i> logic. For notation, we use $\wedge$ instead of & for conjunction and $\rightarrow$ instead of $\supset$ for implication. Our formulae are defined as: </p><ul><li> $\bot$ is a formula ('falsehood') </li><li> a propositional variable $p$ is a formula, </li><li> if $A$ is a formula, so is $\neg A$ </li><li> if $A, B$ are formulae, so are $A \vee B$, $A \wedge B$ and $A \rightarrow B$. </li></ul><p></p> <p>We shall need to use inference rules and derivations. An <i>inference rule</i> has the form: $$ \frac{X_1 \quad \ldots \quad X_n}{Y} $$ </p> <p>with the meaning that $X_i$ are <i>premises</i> and $Y$ the <i>conclusion</i> of the rule. In natural deduction, each premise and each conclusion of an inference rule is simply a formula. 
A <i>derivation</i> is an arrangement of inference rules in tree form; the derived formula is at the bottom and inference rules connect each conclusion to premises above the bar. A derivation built up from (instances of) inference rules this way gives a proof of the formula at the root. </p> <p>The rules are as follows (the notation will be explained below): $$ \cfrac{A\quad B}{A \wedge B}~\color{blue}{(\wedge\mathrm{\small INTRO})} \quad \cfrac{A \wedge B}{A}~\color{blue}{(\wedge\mathrm{\small ELIM/1})} \quad \cfrac{A \wedge B}{B}~\color{blue}{(\wedge\mathrm{\small ELIM/2})}$$ $$ \cfrac{A}{A \vee B}~\color{blue}{(\vee{\mathrm{\small INTRO/1}})} \quad \quad \frac{B}{A \vee B}~\color{blue}{(\vee{\mathrm{\small INTRO/2}})} $$ $$ \cfrac{\begin{array}[cc]{} \\ {} \\ A \vee B \end{array} \quad \begin{array}[cc]{} [A] \\ ~~\vdots \\ \phantom{[}C\phantom{]} \end{array}\quad \begin{array}[cc]{} [B] \\ ~~\vdots \\ \phantom{[}C\phantom{]} \end{array} } {C}~\color{blue}{(\vee{\mathrm{\small ELIM}})} $$ $$ \cfrac{\begin{array}[cc]{} [A] \\ ~~\vdots \\ \bot \end{array} }{\neg A}~\color{blue}{(\neg{\mathrm{\small INTRO}})} \quad \cfrac{A \quad \neg{A}}{\bot}~\color{blue}{(\neg{\mathrm{\small ELIM}})} $$ $$ \cfrac{ \begin{array}[cc]{} [A] \\ ~~\vdots \\ \phantom{[}B\phantom{]} \end{array}}{A \rightarrow B} ~\implintro \quad \quad \frac{A \rightarrow B \quad A}{B}~\implelim $$ $$ \cfrac{\bot}{A}~\botelim $$ </p> <p>Each rule preserves validity, which ensures that derivations produce correct results (<i>soundness</i>). If one does not consider the proof-theoretical explanation of what an inference rule is to be sufficient, one needs to talk about models and interpretations. 
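<p>The remark about validity can be made concrete with a small sanity check: interpreting formulae in classical two-valued models, each rule should preserve truth under every valuation. Here is a sketch in Python (the rule names and encoding are mine; note that two-valued models validate strictly more than intuitionistic logic, so this checks soundness only):</p>

```python
from itertools import product

# Classical sanity check: for each rule, whenever all premises are true
# under a valuation of the propositional variables (a, b), the
# conclusion is true as well.

def implies(a, b):
    return (not a) or b

# each rule: (premises, conclusion), as functions of a valuation (a, b)
rules = {
    "AND-INTRO":  (lambda a, b: [a, b],             lambda a, b: a and b),
    "AND-ELIM/1": (lambda a, b: [a and b],          lambda a, b: a),
    "OR-INTRO/1": (lambda a, b: [a],                lambda a, b: a or b),
    "IMPL-ELIM":  (lambda a, b: [implies(a, b), a], lambda a, b: b),
}

for name, (premises, conclusion) in rules.items():
    for a, b in product([False, True], repeat=2):
        if all(premises(a, b)):
            assert conclusion(a, b), name
```

The hypothesis-free check is possible here only because we ignore assumption discharge; rules like $\implintro$ would need the context-tracking formulation given further below.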
<p> Moreover the collection of rules is systematic in a different way: if we ignore the duplication of the $\color{blue}{(\wedge{\mathrm{\small ELIM}})},\color{blue}{(\vee{\mathrm{\small INTRO}})}$ rules, we can say: for every logical operator $\neg, \wedge, \vee, \rightarrow$ there is one <i>introduction rule</i> and one <i>elimination rule</i>. In a sense, the introduction rule <i>defines</i> the meaning of the operator, and the elimination rule is ultimately a consequence of this definition: it removes precisely what was introduced! These are the "detours" in the structure of proofs, which can be removed. <p>In order to avoid unnecessary complication, we adopt the convention that in rule $\botelim$, the conclusion $A$ is always different from $\bot$. If we omit the rule $\botelim$, we end up with <i>minimal logic</i>. Let us now take two rules and read what they are saying: </p><ul><li> $\andelimleft$ is the (left) <i>elimination rule</i> for $\wedge$, and says: if you have a proof of $A \wedge B$, then you also have a proof of $A$ </li><li> $\andintro$ is the <i>introduction rule</i> for $\wedge$, and says: if you have a proof of $A$ and a proof of $B$, you may combine them into a proof of $A \wedge B$. </li></ul> </p><p><b>Assumptions.</b> As mentioned, a derivation may <i>start</i> from any formula $A$ at the leaves. In the course of the proof, the assumptions may be <i>discharged</i>, as seen in the inference rules $\color{blue}{(\rightarrow{\mathrm{\small INTRO}})}$, $\color{blue}{(\neg{\mathrm{\small INTRO}})}$ and $\color{blue}{(\vee{\mathrm{\small ELIM}})}$. The discharged assumption is then marked $[A]$ to indicate that the conclusion does not depend on it. The rule $\color{blue}{(\rightarrow{\mathrm{\small INTRO}})}$ says if you can prove $B$ from assumption $A$, then we have proven that $B$ follows from $A$, i.e. $A \rightarrow B$. 
Note that it is possible that there are several other, undischarged assumptions still present in the derivation, and that the discharged assumption may have occurred multiple times in the subderivation, with all those occurrences being discharged at once. </p><p>Let's go through an example proof of $X \vee (Y \wedge Z) \rightarrow (X \vee Y) \wedge (X \vee Z)$. For this, we will need a derivation that ends in $\implintro$. $$ \cfrac{ \begin{array}[cc]{} [X \vee (Y \wedge Z)] \\ ~~\vdots \\ (X \vee Y) \wedge (X \vee Z) \end{array} }{ X \vee (Y \wedge Z) \rightarrow (X \vee Y) \wedge (X \vee Z) }{\implintro} $$ For the upper part, we need $\orelim$. $$ \cfrac{ \begin{array}[cc]{} {\phantom{X}} \\ {\phantom{\vdots}} \\ X \vee (Y \wedge Z) \end{array} \quad \begin{array}[cc]{} [X] \\ ~~\vdots \\ (X \vee Y) \wedge (X \vee Z) \end{array} \quad \begin{array}[cc]{} [Y \wedge Z] \\ ~~\vdots \\ (X \vee Y) \wedge (X \vee Z) \end{array} }{ (X \vee Y) \wedge (X \vee Z) }{\orelim} $$ The two branches are now easy: $$ \cfrac{\cfrac{X}{(X \vee Y)}\orintro\quad\cfrac{X}{(X \vee Z)}\orintro} {(X \vee Y) \wedge (X \vee Z)}{\andintro} $$ $$ \cfrac{\cfrac{\cfrac{Y \wedge Z}{Y}\andelimleft}{(X \vee Y)}\orintro\quad\cfrac{\cfrac{Y \wedge Z}{Z}\andelimright}{(X \vee Z)}\orintro} {(X \vee Y) \wedge (X \vee Z)}{\andintro} $$ </p><p><b>Intuitionistic treatment of negation.</b> What if we treated $\neg A$ as an abbreviation for $A \rightarrow \bot$? Then the role of $\color{blue}{(\neg{\mathrm{\small INTRO}})}$ is played by $\color{blue}{(\rightarrow{\mathrm{\small INTRO}})}$ and the role of $\color{blue}{(\neg{\mathrm{\small ELIM}})}$ is played by $\color{blue}{(\rightarrow{\mathrm{\small ELIM}})}$. In fact, every derivation in the language with primitive $\neg$ could be rewritten into a derivation for the language where $\neg$ is an abbreviation. Neat! </p><p><b>Normal form.</b> A natural deduction proof is said to be in normal form if there is no detour, i.e. 
if there is no formula that is both the conclusion of an introduction rule and at the same time a major premise of an elimination rule. This is due to what Prawitz calls the <i>inversion principle</i>: whenever there is an elimination rule which operates on the result of an introduction rule above, then the results of that elimination rule were already present among the premises of the introduction rule. A normal derivation will - roughly - take apart the assumptions at the leaves with elimination rules and then build up the conclusion with introduction rules. (The $\orelim$ rule complicates this a little, since it passes through a formula $C$ from its minor premises; the proof by induction on the structure of derivations will transform such proofs by rewriting them, pushing the application of $\orelim$ downwards). <p><b>Curry-Howard correspondence.</b> As discussed <a href="https://blog.burakemir.ch/2020/04/higher-order-logic-and-equality.html">last time</a>, we can <i>assign</i> terms of simply-typed $\lambda$-calculus to these derivations. Propositions of IPC are then exactly the types of simply-typed $\lambda$-calculus, with $\wedge$ for product types and $\vee$ for sum (coproduct) types. Or, from the vantage point of logic, $\lambda$-terms are <i>proof terms</i>. Normalization of proofs corresponds to normalization of $\lambda$-terms; $\beta$-reduction corresponds to removal of a detour involving $\implintro$ and $\implelim$. </p><p>Still on Curry-Howard, it's a bit less obvious how to interpret $\botelim$ as a typing rule; the answer is that it needs an $\mathsf{abort}$ constant which (similar to throwing an exception) passes control to an imaginary top-level function that never terminates. 
This and more is explained succinctly in Zena Ariola's slides [10], but I do want to point out how this yields a neat programmer's intuition for the convention regarding $\botelim$: obviously, a program $\mathsf{abort}(\ldots(\mathsf{abort}(t))\ldots)$ can always be replaced by a simple $\mathsf{abort}(t)$. </p> <p>[10] Zena Ariola. <a href="https://drive.google.com/file/d/1DQ9uqn_ev6h7Nee9dhiFu1qgNUdO5sxC/view">The interplay between logic and computation</a>. slides from a talk at "Women in Logic 2019". </p> <h3><a name="natseq">Natural deduction in sequent calculus style</a></h3> <p>There is something odd about the above presentation: we have propositions at the leaves of a derivation which are <i>temporarily</i> treated as assumptions and then cease to be assumptions when they are discharged. It is also possible to not discharge them, or discharge only some of them when making a proof step. Here, we present an alternative formulation that makes it clear what happens with assumptions at every inference step. <p>(I thought for a long time that it was type theorists or programming language researchers who invented this, suspecting typographical reasons, but to my surprise I learned that it was in fact Gentzen [9] himself in 1936 who introduced this alternative in the paper where he proved the consistency of Peano arithmetic using transfinite induction.) <p>We shall present now intuitionistic natural deduction "in sequent calculus style". Derivations are still tree-shaped, however instead of formulas, premises and conclusions are <i>sequents</i> of the form $\Gamma \vdash A$ where $\Gamma$ is a (possibly empty) <i>set</i> of formulae. The meaning of a sequent is "from assumptions $\Gamma$, one can (intuitionistically) prove $A$". We write the union of such sets with a comma, omit braces around singleton sets and treat $\neg A$ as abbreviation for $A \rightarrow \bot$ so we don't have to give any additional rules. 
Then, the rules look like this: $$ \cfrac{\Gamma \vdash A\quad \Delta \vdash B}{\Gamma, \Delta \vdash A \wedge B}~\color{blue}{(\wedge\mathrm{\small INTRO})} \quad \cfrac{\Gamma \vdash A \wedge B}{\Gamma \vdash A}~\color{blue}{(\wedge\mathrm{\small ELIM/1})} \quad \cfrac{\Gamma \vdash A \wedge B}{\Gamma \vdash B}~\color{blue}{(\wedge\mathrm{\small ELIM/2})}$$ $$ \cfrac{\Gamma \vdash A}{\Gamma \vdash A \vee B}~\color{blue}{(\vee{\mathrm{\small INTRO/1}})} \quad \quad \frac{\Gamma \vdash B}{\Gamma \vdash A \vee B}~\color{blue}{(\vee{\mathrm{\small INTRO/2}})} $$ $$ \cfrac{\Gamma \vdash A \vee B \quad\quad \Delta, A \vdash C \quad\quad \Theta, B \vdash C}{\Gamma, \Delta, \Theta \vdash C}~\color{blue}{(\vee{\mathrm{\small ELIM}})} $$ $$ \cfrac{\Gamma, A \vdash B}{\Gamma \vdash A \rightarrow B}~\implintro \quad \quad \cfrac{\Gamma \vdash A \rightarrow B \quad \Delta \vdash A}{\Gamma, \Delta \vdash B}~\implelim $$ $$ \cfrac{\Gamma \vdash \bot}{\Gamma \vdash A}~\botelim $$ </p> <p>These rules make it clear that it is now "compulsory" to discharge assumptions. When a rule has multiple premises, the assumptions are combined. By choosing a broad enough set of assumptions ahead of time (weakening), the different sets of assumptions can be assumed to be the same. When moving towards sequent calculus, one can do away with the treatment of assumptions as sets and treat them as ordered sequences, governed by <i>structural rules</i> like exchange and contraction; removing one or both structural rules then yields substructural logic and proof theory. 
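<p>The bookkeeping of assumptions can be prototyped directly. Here is a minimal proof checker for the implicational fragment, a sketch in Python (the representation and the names ax, impl_intro, impl_elim are mine): each rule application takes a context and returns the formula its sequent proves, raising an error if the step is invalid.</p>

```python
# Formulas: strings are atoms, ("->", A, B) is an implication.

def ax(ctx, a):
    # axiom: Gamma, A |- A
    if a not in ctx:
        raise ValueError("assumption not in context")
    return a

def impl_intro(ctx, a, body):
    # ->INTRO: from Gamma, A |- B conclude Gamma |- A -> B
    b = body(ctx | {a})
    return ("->", a, b)

def impl_elim(ctx, major, minor):
    # ->ELIM: from Gamma |- A -> B and Delta |- A conclude Gamma, Delta |- B
    f, a = major(ctx), minor(ctx)
    if f[0] != "->" or f[1] != a:
        raise ValueError("major and minor premise do not match")
    return f[2]

# |- A -> (B -> A): discharge A, then B, then use the axiom for A
k = impl_intro(frozenset(), "A",
               lambda c: impl_intro(c, "B", lambda c2: ax(c2, "A")))
assert k == ("->", "A", ("->", "B", "A"))
```

For simplicity the sketch combines contexts by passing one growing set around, which corresponds to the remark above that weakening lets us assume all premises share the same assumptions.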
<p>[9] Gerhard Gentzen (1936) "<a href="https://gdz.sub.uni-goettingen.de/id/PPN235181684_0112?tify={%22pages%22:[497],%22panX%22:0.469,%22panY%22:0.762,%22view%22:%22info%22,%22zoom%22:0.362}">Die Widerspruchsfreiheit der reinen Zahlentheorie</a>" <h3>Conclusion</h3> <p>We are at the end of our tour; I hope this can provide a good starting point for studying the multiple connections between $\lambda$-calculus, logic and proof theory. <p>Sound reasoning - preferably automated - and some form of formal logic and proof are going to be key when modeling some domain and constructing correct software for it. Programmers are used to dealing with precise formulations (programming languages, specs) but only a subset will be interested in the "formalization of all mathematics"; a programmer with a basic understanding of simply-typed $\lambda$-calculus should ideally have a very clear path to understanding logic, proof assistants, type theory - I don't think that this is the case today. <p>The activities in constructing models and software may be similar to mathematics, but they are certainly not the same. Each domain is a mathematical universe, with random and often contradictory pieces of information waiting to be organized in "islands of consistency". Hopefully we will one day reach some widespread understanding and usable tool support for formal reasoning that can serve both practical informatics and the formalization of mathematics. <p>[If you want to discuss, the <a href="https://news.ycombinator.com/item?id=23365982">HN thread</a> is a great place to leave comments] Burak Emirhttp://www.blogger.com/profile/01651698653168460072noreply@blogger.com0tag:blogger.com,1999:blog-28320851.post-56269490594370324752020-04-02T16:32:00.001-07:002020-07-01T08:10:43.147-07:00Higher-order Logic and Equality.<p>In this post, I want to sketch a particular perspective on $\lambda$-calculus, higher-order logic, and Church's simple theory of types. I have a few motivations to write this up. 
One of them is that as an undergrad (ages ago), upon encountering first-order predicate logic, I had endlessly wondered why one couldn't use "boolean functions" and something like functional programming for logic. It was only much later that I discovered Church's 1940 typed $\lambda$ calculus was, in a sense, just that. <p>When people think of the connection between $\lambda$-calculus and logic, the first thing that comes to their mind may well be the Curry-Howard correspondence. Similarly, the first thing that people may think of when hearing type theory is Martin-Löf type theory. So another motivation is to have a place that describes higher-order logic a.k.a. simple type theory - which after all was the first appearance of typed $\lambda$ calculus - and how there are multiple ways to relate it to formal logic and proofs. <h3>Preliminary: Natural Deduction</h3> <p>Natural deduction [<a href="https://en.wikipedia.org/wiki/Natural_deduction">wikipedia</a>], in short, is a system of reasoning involving inference rules to derive new true statements from existing ones. What sets it apart from a Hilbert system [<a href="https://en.wikipedia.org/wiki/Hilbert_system">wikipedia</a>] is that the logical operators themselves are described using inference rules, and proofs are trees (derivations) formed from such rules. <p>In this journey, we present natural deduction rules in sequent calculus style: assumptions are tracked in a context $\Gamma$. <h3>Minimal Logic</h3> <!--In proof theory, it is the structures of the proofs that are the topic of interest. --> <p>Minimal logic [<a href="https://en.wikipedia.org/wiki/Minimal_logic">wikipedia</a>], also referred to as the positive fragment of intuitionistic logic, is a deductive system for propositional logic that does not contain any inference rule for falsehood. 
Here are the rules: $$ \frac{}{\Gamma, A \vdash A}\quad\quad \frac{\Gamma, A \vdash B}{\Gamma \vdash A \rightarrow B} $$ $$ \frac{\Gamma \vdash A \rightarrow B \quad \Gamma \vdash A}{\Gamma \vdash B}$$ We could have added conjunction $\wedge$ or disjunction $\vee$ or a symbol $\bot$ to stand for a false proposition, along with the convention that $\neg A = A \rightarrow \bot$; it would still be minimal logic. "Full" intuitionistic logic has a rule that goes from $\bot$ to an arbitrary proposition $A$ (<i>ex falso quodlibet</i>); and other rules, like double negation elimination, get us classical logic. <p>The naming scheme that calls this "minimal" is consistent with Dag Prawitz's "Natural Deduction: A Proof-Theoretical Study." Yet, I believe it is fairly common to call this kind of logic intuitionistic logic, even though there is no rule for $\bot$. <p> The proofs for this logic have a structure that is familiar to all students of functional programming languages. <!-- % % % simply typed lambda calc % % --> <br> <h3>Simply typed $\lambda$-calculus</h3> <p>We now follow the celebrated "propositions-as-types" or Curry-Howard correspondence and assign names (variables) to assumptions, terms (expressions) to proofs, using this language: $$ s, t ::= x \enspace \vert \enspace (\lambda x: T. t) \enspace \vert \enspace(s\ t) \enspace $$ $$ T ::= b_i \enspace \vert \enspace T \rightarrow T $$ <p>Here $b_i$ are some given base types. Our <b>annotated</b> inference rules: $$ \frac{}{\Gamma, x:A \vdash x:A}\quad\quad \frac{\Gamma, x:A \vdash s:B}{\Gamma \vdash (\lambda x:A. s) : A \rightarrow B} $$ $$ \frac{\Gamma \vdash s:A \rightarrow B \quad \Gamma \vdash t:A}{\Gamma \vdash (s\ t):B}$$ A term encodes the entire derivation (or natural deduction proof) of the proposition/type. 
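<p>To make "a term encodes the derivation" tangible, here is a sketch of a checker that recovers the proposition proved by a term (the encoding is mine): types are strings (base types) or ("->", T1, T2), and terms are tagged tuples mirroring the grammar above.</p>

```python
def typeof(ctx, term):
    # Compute the type of an annotated term; by Curry-Howard this
    # reconstructs the formula proven by the derivation the term encodes.
    tag = term[0]
    if tag == "var":                        # Gamma, x:A |- x:A
        return ctx[term[1]]
    if tag == "lam":                        # ->INTRO
        _, x, ty, body = term
        return ("->", ty, typeof({**ctx, x: ty}, body))
    if tag == "app":                        # ->ELIM
        _, s, t = term
        f, a = typeof(ctx, s), typeof(ctx, t)
        if f[0] != "->" or f[1] != a:
            raise TypeError("ill-typed application")
        return f[2]
    raise ValueError("unknown term")

# lambda f:(A->B). lambda x:A. (f x)  proves  (A -> B) -> (A -> B)
t = ("lam", "f", ("->", "A", "B"),
     ("lam", "x", "A", ("app", ("var", "f"), ("var", "x"))))
assert typeof({}, t) == ("->", ("->", "A", "B"), ("->", "A", "B"))
```

The failure case matters just as much: an ill-typed application corresponds to a tree of rule instances that is not a derivation at all.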
<h3>Reduction as proof normalization</h3> <p>From the Curry-Howard angle, functional programming is "proof normalization": derivations that contain redundant steps get <b>simplified</b> to <i>smaller</i> derivations. The redundancy is always of the form that part of the derivation needs an assumption $x:A$, and another part of the derivation actually "produces" a term of type $A$. In more detail, consider a derivation $\Gamma \vdash (\lambda x:A. s): A \rightarrow B$ and a derivation $\Gamma \vdash t: A$, which can be combined to yield $\Gamma \vdash ((\lambda x:A. s)\ t) : B$. The detour can be removed by taking every occurrence of $x$ within $s$ and replacing it with $t$. This yields a <i>smaller</i> derivation $\Gamma \vdash s[t/x] : B$. If two derivations stand in this relation, we call them $\beta$-equivalent. $$ ((\lambda x:A. s)\ t) \approx_\beta s[t/x] $$ <p>Here $[t/x]$ is notation for substituting $t$ for $x$, and we hurry to point out that care must be taken: we may need to rename bound variables. Two terms that only differ in the names of their bound variables are $\alpha$-equivalent and we will pretend they are <i>equal</i>. If we used <a href="https://en.wikipedia.org/wiki/De_Bruijn_index">de Bruijn indices</a> instead of variable names, there would be no need for $\alpha$-equivalence. <p>[Digression: the identity above is <b>not</b> an algebraic identity - these are syntactic objects which involve variable binding. Sometimes it is ok to treat lambda-calculi as an algebra, and this leads to combinatory logic, see <a href="https://cstheory.stackexchange.com/questions/9993/what-is-the-point-of-calling-lambda-calculus-an-algebra">neelk's answer on stackexchange</a> and Selinger's <a href="https://www.mscs.dal.ca/~selinger/papers/combinatory.pdf">paper</a>.] <p>If we give a direction to the identity, we obtain $\beta$ reduction: $$((\lambda x:A. 
s)\ t) \Rightarrow_\beta s[t/x]$$ We can show that $\Rightarrow_\beta$ is a <b>strongly normalizing</b>, and therefore terminating, reduction relation, and that $\Rightarrow_\beta$ is confluent (Church-Rosser property), so all reductions end up in the same normal form. This yields a decision procedure for the "word problem": to find out whether $s \approx_\beta t$, reduce both to normal forms and check that they are <i>equal</i>. <!--Most of the time, the relation is written $=_\beta$ and contained in equivalence relation which is called "definitional equality".--> <!-- % % % higher order logic % % --> <p>We now shift our perspective, leave the Curry-Howard correspondence aside and look at the theory of <i>equality</i> between these expressions. \(\newcommand{\qeq}[0]{\color{blue} {\mathrm{eq}}}\) <h3>Higher-order Logic, using Equality alone</h3><p>We are going to build a calculus for classical logic, based on our language. Our development here follows the $Q_0$ type theory (due to Peter Andrews). This is a type theory which is close to Church's 1940 simply-typed $\lambda$ calculus, but uses only equality and a description operator. We will omit the description operator. <p>We need two base types: $\iota$ for "individuals" and $\omicron$ for "propositions". Since the logic is classical, every proposition will be either true or false, so $\omicron$ can be seen as the type of these "truth values". With this setup, we can view any term of type $\iota \rightarrow \omicron$ as describing a <i>set of individuals</i>, or alternatively, a predicate. This resembles the characteristic function of a set in mathematics and also what a programmer calls a Boolean function. <p>[ The name "higher-order" is appropriate, since this logic lets us also talk about sets of sets of individuals, via terms of type $(\iota \rightarrow \omicron) \rightarrow \omicron$ and so on. This use also matches the terminology "higher-order function", which is not a coincidence. 
Note that you can quantify over "smaller-order predicates", but you can't express a "set of all sets."] <p>The only built-in/primitive notion is a logical constant $\qeq_\alpha: \alpha \rightarrow \alpha \rightarrow \omicron$ which serves as equality operator for any type $\alpha$. Before we build propositions out of logical operators and quantifiers, let us write down types of such operators; with the common convention that $\alpha \rightarrow \beta \rightarrow \gamma$ stands for $\alpha \rightarrow (\beta \rightarrow \gamma)$, the types would be: $$ \newcommand{\qtrue}[0]{{\color{blue} {\mathrm{true}}}} \newcommand{\qfalse}[0]{{\color{blue} {\mathrm{false}}}} \newcommand{\qnot}[0]{{\color{blue} {\mathrm{not}}}} \newcommand{\qor}[0]{\color{blue} {\mathrm{or}}} \newcommand{\qand}[0]{\color{blue} {\mathrm{and}}} \newcommand{\qimplies}[0]{\color{blue} {\mathrm{implies}}} \begin{align*} \qtrue:\ & \omicron\\ \qfalse:\ & \omicron\\ \qnot:\ & \omicron \rightarrow \omicron\\ \qand:\ & \omicron \rightarrow \omicron \rightarrow \omicron\\ \qor:\ & \omicron \rightarrow \omicron \rightarrow \omicron\\ \qimplies:\ & \omicron \rightarrow \omicron \rightarrow \omicron\\ \end{align*} $$ Using again a notational convention that $((r~s)~t)$ is written $r~s~t$, the tautology $A \vee B \rightarrow B \vee A$ is expressed as $\qimplies~(\qor~A~B)~(\qor~B~A)$. <p>What about quantifiers? Since we have $\lambda$ for variable binding, there is no need for other variable binders like $\forall$ or $\exists$. Instead we introduce universal and existential quantification using two higher-order operators: $$ \newcommand{\qPi}[0]{\color{blue} \Pi} \newcommand{\qSigma}[0]{\color{blue} \Sigma} \begin{align*} \qPi_\alpha:\ & (\alpha \rightarrow \omicron) \rightarrow \omicron\\ \qSigma_\alpha:\ & (\alpha \rightarrow \omicron) \rightarrow \omicron\\ \end{align*} $$ So $\forall x .\exists y. x = y$ would be expressed as $\qPi \lambda x. \qSigma \lambda y. \qeq~x~y$. 
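<p>The "predicates as characteristic functions" reading is easy to prototype: over a finite domain of individuals, the operators $\Pi$ and $\Sigma$ of type $(\alpha \rightarrow \omicron) \rightarrow \omicron$ are just the higher-order functions all and any. A sketch with a made-up three-element domain:</p>

```python
dom = [0, 1, 2]    # a hypothetical finite type of individuals

def Pi(p):
    # (iota -> omicron) -> omicron: "the predicate p holds everywhere"
    return all(p(x) for x in dom)

def Sigma(p):
    # (iota -> omicron) -> omicron: "the predicate p holds somewhere"
    return any(p(x) for x in dom)

# forall x. exists y. x = y, written Pi(lambda x: Sigma(lambda y: x == y))
assert Pi(lambda x: Sigma(lambda y: x == y))
assert not Pi(lambda x: x < 2)      # 2 is a counterexample
```

Note that the lambdas play exactly the role of the binders: no separate syntax for $\forall$ or $\exists$ is needed.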
People studying classical logical systems frequently "apply Occam's razor" and try to define as few things as possible. The $Q_0$ way is to apply Occam's razor as far as it will go. All logical operators are "implemented" using only $\qeq$. $$ \newcommand{\tprop}[0]{\omicron} \begin{align*} \qtrue &:= \qeq_{\tprop \rightarrow \tprop \rightarrow \tprop} \qeq_\tprop \qeq_\tprop\\ \qfalse &:= \qeq_{\tprop \rightarrow \tprop} (\lambda x:\tprop. \qtrue) (\lambda x:\tprop. x)\\ \qnot &:= \qeq_{\tprop} \qfalse\\ \qand &:= \lambda x:\tprop. \lambda y:\tprop. \\ &\quad \quad \qeq_{(\tprop \rightarrow \tprop \rightarrow \tprop) \rightarrow \tprop} \\ &\quad \quad \quad (\lambda z: \tprop \rightarrow \tprop \rightarrow \tprop. z~\qtrue~\qtrue) \\ &\quad \quad \quad (\lambda z: \tprop \rightarrow \tprop \rightarrow \tprop. z~x~y) \\ \qor &:= \lambda x:\tprop. \lambda y:\tprop. \\ &\quad \quad \qnot~(\qand~(\qnot~x)~(\qnot~y))\\ \qimplies &:= \lambda x:\tprop. \lambda y:\tprop. \qeq~x~(\qand~x~y)\\ \qPi_\alpha &:= \qeq_{\alpha \rightarrow \tprop}~(\lambda x : \alpha. \qtrue)\\ \qSigma_{\alpha} &:= \lambda x: \alpha \rightarrow \tprop.~\qnot~(\qPi_{\alpha}~(\lambda y:\alpha.~\qnot~(x~y))).\\ \end{align*} $$ <p>One advantage of having everything defined in terms of $\qeq$ is that when we give inference rules to explain how reasoning takes place, they all involve only $\qeq$. We are using the notation of inference rules to describe an equational theory. The only assertions $\Gamma \vdash r$ that are derived are (more or less - up to $\beta$-equivalence) of the form $\Gamma \vdash \qeq_\alpha~s~t$, but we explicitly write some assumptions that we make about types. 
Here they are: $$ \frac{\Gamma \vdash s : \tprop \rightarrow \tprop}{\Gamma \vdash \qeq_\tprop~(\qand~(s~\qtrue)~(s~\qfalse))~(\qPi_\tprop~\lambda x : \tprop.~s~x)} $$ $$ \frac{\Gamma \vdash s : \alpha \quad \Gamma \vdash t:\alpha \quad \Gamma \vdash z: \alpha \rightarrow \tprop}{\Gamma \vdash \qimplies~(\qeq_\alpha~s~t)~(\qeq_\tprop~(z~s)~(z~t))} $$ $$ \frac{\Gamma \vdash s : \beta \rightarrow \alpha \quad \Gamma \vdash t:\beta \rightarrow \alpha}{\Gamma \vdash \qeq~(\qeq_{\beta \rightarrow \alpha}~s~t)~(\qPi_\beta~\lambda x:\beta.~\qeq_\alpha~(s~x)~(t~x))} $$ $$ \frac{}{\Gamma \vdash \qeq~((\lambda x : \alpha. s)~t)~(s [t/x])} $$ $$ \frac{\Gamma, x: \alpha \vdash r : \tprop \quad \Gamma \vdash r[A/x] \quad \vdash \qeq_\alpha~A~B}{\Gamma \vdash r[B/x]} $$ <!-- % % % Explanation why this is a logic % % --> <p>The first rule is an axiom that says $\tprop$ consists of $\qtrue$ and $\qfalse$. This looks like a data type specification for $\tprop$. The second rule is an axiom that says that if two objects are equal then they have the same properties (Leibniz would say "indiscernibility of identicals"). The third rule is an axiom that says two expressions of function type are equal if they agree on all arguments; this is the extensional definition of function equality. <p>The fourth rule is an axiom for $\beta$-conversion. We're conveniently hiding all the naming stuff again behind $[t/x]$ notation; but essentially this lets us apply $\beta$-reduction as we please. <p>The fifth rule is a proper inference rule - the only actual inference rule, which has a premiss $\Gamma \vdash r[A/x]$ - that says "equals can be replaced by equals", anywhere in a term. This is where the actual reasoning happens. For example, a proposition $\forall x:\alpha. P$ is encoded as $\qPi~\lambda x:\alpha. P$ and proving it "within the system" is literally showing that $\lambda x:\alpha. P$ is <i>equal</i> to $\lambda x:\alpha. \qtrue$. 
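<p>The definitions of the connectives from $\qeq$ alone can be sanity-checked semantically in a two-valued model, interpreting $\omicron$ as Python booleans and $\qeq$ as (extensional) equality. This sketch only checks the definitions against the usual truth tables; it is not reasoning inside $Q_0$:</p>

```python
from itertools import product

B = [False, True]

def feq(f, g):
    # extensional equality for unary boolean functions
    return all(f(x) == g(x) for x in B)

# false := eq (lambda x. true) (lambda x. x): the two disagree at False
false_ = feq(lambda x: True, lambda x: x)

def not_(x):
    return false_ == x

# and x y := eq (lambda z. z true true) (lambda z. z x y), where z
# ranges over binary boolean functions; quantify extensionally by
# enumerating all 16 such functions as lookup tables
tables = [dict(zip([(p, q) for p in B for q in B], vals))
          for vals in product(B, repeat=4)]

def and_(x, y):
    return all(z[(True, True)] == z[(x, y)] for z in tables)

def or_(x, y):
    return not_(and_(not_(x), not_(y)))

def implies_(x, y):
    return x == and_(x, y)

assert false_ is False
for x, y in product(B, repeat=2):
    assert and_(x, y) == (x and y)
    assert or_(x, y) == (x or y)
    assert implies_(x, y) == ((not x) or y)
```

The check for and is the interesting one: the two functions of z agree on every binary boolean function z exactly when (x, y) is (true, true), which is the trick behind the definition above.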
<h3>Wrapping up</h3> <p>I presented this in a notation/style that is different from both Andrews's book and the Stanford Encyclopedia of Philosophy's entry on <a href="https://plato.stanford.edu/entries/type-theory-church/">Church's type theory</a>. It is closer to the way typed programming languages are formulated. <p>We have only scratched the surface, but sketched how a whole formal system of reasoning can indeed be based on $\lambda$-calculus, with $\lambda$-terms playing the role of characteristic functions. And it didn't actually involve the Curry-Howard correspondence. I like to think of this situation as "multiple levels of logic," and topics like higher-order unification and meta-logic/logical frameworks make use of this. <p>I hope these notes can help raise awareness of what is going on with simple type theory and higher-order logic. A lot more can be said, but that is for another post. If you are interested in diving deeper, here are some references that talk about higher-order logic. <p> Stanford Encyclopedia of Philosophy's entry on <a href="https://plato.stanford.edu/entries/type-theory-church/">Church's type theory</a> <p> Jean Gallier. "Constructive Logics. Part I: A Tutorial on Proof Systems and Typed λ-Calculi", freely accessible on <a href="https://www.cis.upenn.edu/~jean/gbooks/logic.html">his logic papers page</a>. <p> William M. Farmer "The Seven Virtues of Type Theory" <a href="https://doi.org/10.1016/j.jal.2007.11.001">https://doi.org/10.1016/j.jal.2007.11.001</a></p><p> Peter Andrews "An Introduction to Mathematical Logic and Type Theory: to Truth through Proof" (2002) <p> Alonzo Church. "A Formulation of the Simple Theory of Types". J. Symb. Logic, 5 (1) (1940) <p> Dag Prawitz. "Natural Deduction: A Proof-Theoretical Study" (2006) Dover - new issue of the celebrated 1965 work. 
Burak Emirhttp://www.blogger.com/profile/01651698653168460072noreply@blogger.com2tag:blogger.com,1999:blog-28320851.post-73555573722850273262019-11-26T06:46:00.000-08:002019-11-26T06:46:02.598-08:00Getting from TikZ to PNGFor things like papers, tutorials, books, LaTeX is still where it's at, but for HTML I'd like to be able to do things like commutative diagrams... how to make this work without duplicating effort? My LaTeX foo has become rusty over the years, but despite much searching, there seems to be no alternative. So here is a little note on how to get from TikZ pictures to PNG.<br /><h2>TikZ for drawing diagrams</h2>TikZ (TikZ ist kein Zeichenprogramm) is probably the most powerful LaTeX package for graphics. There are supporting packages for commutative diagrams.<br /><br /><h2>Exporting TikZ to PNG</h2>After some searching I found the following way: use the "standalone" document class...<br /> <pre><br />\documentclass{standalone}<br />\usepackage{tikz}<br />\usetikzlibrary{cd}<br />\begin{document}<br />\begin{tikzcd}<br /> & X \arrow[dl] \arrow[d, dotted] \arrow[dr] & \\<br /> A & A \times B \arrow [l] \arrow{r}& B<br />\end{tikzcd}<br />\end{document}<br /></pre> ... 
together with the convert tool (ImageMagick): <pre><br />product.png: product.tex<br /> pdflatex $<<br /> convert -density 600x600 product.pdf \<br /> -quality 90 -resize 1080x800 -strip $@<br /></pre> <h2>The product</h2> And this is what we get: <br /> <div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-zTmCx9HmQP8/Xd05cKpFJJI/AAAAAAAAZqM/LPaEBaHYpkM5uQIwnqBLW2zB8ovYc1GIACNcBGAsYHQ/s1600/product.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://4.bp.blogspot.com/-zTmCx9HmQP8/Xd05cKpFJJI/AAAAAAAAZqM/LPaEBaHYpkM5uQIwnqBLW2zB8ovYc1GIACNcBGAsYHQ/s320/product.png" width="320" height="129" data-original-width="1080" data-original-height="434" /></a></div>Burak Emirhttp://www.blogger.com/profile/01651698653168460072noreply@blogger.com0tag:blogger.com,1999:blog-28320851.post-31997124264281864172018-05-10T08:11:00.002-07:002018-05-10T08:11:15.616-07:00Taking a Glance at Containers, Packages and DevelopmentTime to dust off this blog. <p>A long time ago, I had worked on (yet another) open source implementation of an RPC plugin (code generation + client/server framework) for handling the "services" that are part of protobuf. I had briefly looked at existing C++ solutions, but I remember I primarily cared about finding out what it would take. I looked for ages for a suitable build tool, ultimately settling on gyp and ninja. And then, I still needed to handle dependencies. <p>At some point I must have lost interest, many years have passed, <a href="https://bazel.build">Bazel</a> was meanwhile open-sourced. What (subjectively) changed more than anything else is that the world has moved on: docker containers and cloud setups are becoming the standard rather than the exception, the word "DevOps" was invented, kubernetes is a thing. So people must have a way to deal with APIs. 
There is something called <a href="https://swagger.io">Swagger</a> which can document APIs and also generate client- and server-side code, supporting many languages and targeting language-specific frameworks... <h2>Docker for Development</h2> <p>Apart from the revolution that this means for operations and delivery, this led me to think that a key development story is addressed this way, namely declaring dependencies and integrating them into the build process. Indeed I found a <a href="https://medium.com/travis-on-docker/why-and-how-to-use-docker-for-development-a156c1de3b24">nice article by Travis Reeder</a> that describes exactly the advantages, along with helpful examples and command line references. <h2>Build tools, Modularity and Packages</h2> Modularity is pretty much essential to serious development. I remember from university times that the academic view on modularity was framed primarily from the angle of programming languages (in the old times, "separate compilation"; later, through type systems). Nowadays, nobody considers providing a programming language implementation in isolation: for instance, Rust has the notion of <a href="https://doc.rust-lang.org/book/first-edition/crates-and-modules.html">crates and modules</a>, and what's more, the cargo tool that manages crates is mentioned right in the tutorial. npm would be another good example. In Java-land, there is a distinction between libraries on the class path and managing external dependencies using build tools like maven or gradle. <p>Just to state the obvious: the old view of modularity as perhaps a low-level problem (linking and type systems) ignores the fact that a "modern" build tool has to handle package/module/crate repositories (multiple of them). On the other hand, the emergence of such repositories (plural) has its own problems, as when the "left-pad" package was pulled and broke many projects. 
"<a href="https://www.google.ch/search?q=npm+disaster">npm disaster</a>" is an entertaining search query, but it does lead to not-easily-resolved questions on the right granularity of modules/packages. <h2>Image or Container?</h2> With Docker as an emerging "modularity mechanism", it is very interesting to observe that in conversation (or documentation) people confuse docker image with a docker container. If you are confused, <a href="https://nickjanetakis.com/blog/differences-between-a-dockerfile-docker-image-and-docker-container">this post</a> will get you on the right path again. <p>There is a parallel between a "module" as something that can be built and has an interface, and "module" as something that is <b>linked</b> and part of a running system. Since a container is a whole system in itself, networking is part of its interface. When building a larger system from containers as building blocks, the problem of connecting these requires tackling "low-level" configuration of an entirely different kind (see <a href="https://kubernetes.io/docs/concepts/cluster-administration/networking/#docker-model">Kubernetes docs</a>) than what "module linking" meant in the 80s. Still, I'd consider this a parallel. <h2>Where are the best practices?</h2> A <a href="https://www.google.ch/search?q=site:github.com+"Dockerfile+at+master">quick search</a> reveals that indeed there are 53k+ projects have Dockerfiles, including e.g. <a href="https://github.com/tensorflow/tensorflow/tree/master/tensorflow/tools/docker">tensorflow</a>, which explicitly mentions deployment and development Docker images. So while initially, I thought I had not slept through much essential changes, it seems on the contrary that things are happening at an entirely different level. What conventions should one follow, what are the pitfalls? 
<p>Fortunately for me, there are smart people good at posting articles about these things, so I can appreciate the problems and tentative solutions/best practices mentioned in <a href="https://www.weave.works/blog/kubernetes-best-practices">Top 5 Kubernetes Best Practices</a>. Hat-tip to the "Make sure your microservices aren’t too micro" advice. <h2>D.C. al Coda</h2> How can there be an end to this? Containerization seems to be a perpetual journey. At this point, I would have liked to offer tips based on a "typical" example of a development project and how one would use docker to set it up. However, I am not very active in terms of open source development, and the one project I want to get back to most isn't a developer tool at all. Fortunately, I believe best practices for "docker for development" aren't going to be any different from the best practices for using docker in general. <p>There is an interesting point here: in order to build a container image, one still has to solve the problem of gathering the dependencies. So the problem of gathering dependencies at build time has only been shifted one level (up? down?), though the constraints are very different: an organization can find easy ways to make docker images available to their developers, and only the team that takes care of providing the docker image has to gather the dependencies. This appears to me much saner than a development workflow that makes connections to external repositories on the developer's box at build time. Burak Emirhttp://www.blogger.com/profile/01651698653168460072noreply@blogger.com0tag:blogger.com,1999:blog-28320851.post-15341213256207916862015-06-19T18:30:00.000-07:002015-06-19T18:30:05.703-07:00Web Components and Dart Frameworks<p>Recently, I visited the scientific software and data management (SSDM) group at ETH to give a talk about web components and dart frameworks. Here is a summary. 
<h2>Looking back: a brief History of Web Apps</h2> We live in interesting times: there are many frameworks and languages out there, and in order to understand the context and why things are how they are, it helps to look at the history. The web as a platform for applications has evolved in an erratic process, which I will divide into three broad phases: <ul> <li> the <b>"Middle ages"</b>: content generated on the server-side, Java born with the promise of applets, Javascript launched as a tool for web designers (read: non-programmers), browser wars [1]. Client-side coding was a matter of achieving visual effects, client-server communication happened through forms and page-loads. <li> the <b>"Renaissance"</b>: asynchronous Javascript [2] becomes ubiquitous, client-server communication is now decoupled from forms and page-loads. Rich client-side apps like GMail or Google Maps demonstrate its use. <li> <b>"Modern History"</b>: high-performance layout and javascript engines are the norm. <a href="https://en.wikipedia.org/wiki/HTML5">HTML5</a> delivers a reinterpretation of browser documents and APIs, tailored to web application development. Mobile devices get high resolution screens and useful browsers. </ul> <p>HTML5 continues to evolve, and not all HTML5 technologies are supported on every browser (so developers check on <a href="http://caniuse.com">caniuse.com</a>). The idea of a <b><a href="https://remysharp.com/2010/10/08/what-is-a-polyfill">Polyfill</a></b> is born: adapter code that simulates functionality that does not exist natively in the browser. <h2>Web Components</h2><p>With application development on the web becoming the norm, the traditional questions of software engineering arise: how to approach development of large systems? how to encapsulate and reuse? Web components [3,4] offer an answer. 
</p><p>Web Components are four technologies that together achieve reusable units of behavior: <ul> <li>custom elements</li> <li>templates</li> <li>shadow DOM</li> <li>HTML imports</li></ul> <p>These technologies are being standardized and implemented in a few browsers. Even when not available natively, there is the possibility of using polyfills. </p><p>As with any web technology, there are good documentation resources available on the web that explain these. In the following, I'll just include a simple example for completeness.</p> <h3>Defining a new custom element</h3> <p>Say we want to write a bug-tracking system that displays a number of bug entries. We'd like to treat the entries as we would treat other HTML elements (note the fancy new HTML5 syntax): <pre><br /> <bug-entry bug-id=1e4f bug-status=open><br /></pre> <p>This is possible with custom elements; we'd register the element like so: <pre><br />// Create a new object based on the HTMLElement prototype.<br />var BugEntryProto = Object.create(HTMLElement.prototype);<br /><br />// Set up the element.<br />BugEntryProto.createdCallback = function() {<br /> // ... render the bug entry, e.g. with img, click handler/link<br />}<br />// Register the new element.<br />var BugEntry = document.registerElement('bug-entry', {<br /> prototype: BugEntryProto<br />});<br /></pre><h3>Filling the element with content</h3> Most of the time, custom elements will actually correspond to some other markup. With HTML templates, it is possible to include template markup in documents and instantiate it. 
<pre><br /><template id="bug-entry-template"><br /> <tr><td><img class="bug-image"></td><br /> <td><div class="bug-title"></div></td></tr><br /></template><br /><br />BugEntryProto.createdCallback = function() {<br /> // ...<br /> var template = document.querySelector('#bug-entry-template');<br /> var clone = document.importNode(template.content, true);<br /> // ...<br />}<br /></pre><h3>Specifying layout and style of the content</h3>In HTML, layout and style are specified using CSS; however, CSS rules are global. In order to have a self-contained, encapsulated unit of code, we need some mechanism that enables reuse without worrying about style. This is what is achieved with the shadow DOM: content can be added to the DOM in a way that keeps it separate from the rest of the DOM. <pre><br />BugEntryProto.createdCallback = function() {<br /> <b>// create shadow DOM <br /> var root = this.createShadowRoot();</b><br /><br /> // Adding a template<br /> var template = document.querySelector('#bug-entry-template');<br /><br /><br /> var clone = document.importNode(template.content, true);<br /> // ...<br /> <b>root.appendChild(clone);</b><br />}<br /><br /></pre><h3>Importing the new component</h3>The sections above have defined a component. Now we would like to be able to use the component. HTML imports provide a way to import components easily: <pre><br /><link rel="import" href="bug-entry.html"><br /></pre> <h2>Dart Frameworks</h2> <h3> Polymer </h3>The Polymer.js library provides many things: <ul><li>Polyfills to get web components in browsers that don't support them <li>Many ready-to-use components that follow Material Design UX specifications</li><li>Tools like vulcanize that translate away HTML imports for the purposes of deployment</li></ul> <p>The polymer.dart library is a wrapper around polymer.js that makes the polyfill as well as the components available in dart. It also offers an easy, integrated syntax which is mainly possible because of reflection. 
Instead of using vulcanize, users of polymer.dart rely on the polymer transformer, which translates away the use of reflection and also takes care of combining all the imported components into one unit of deployment. <p>There is a bottom-up development approach suggested by the availability of polymer: start building basic components and assemble them into ever larger ones until you have an application. It is important to note that polymer is not an application development framework: things like logical pages, views, routing are not standardized, but can be provided by components. <h3> Angular, Angular.dart and Angular2 </h3> Angular existed before web components and polymer. It was always a framework for application development, so it provides facilities for pages, views, routing. The dart version of angular is significantly different from the js version, again because the use of reflection permits a nice style that uses annotations. Angular does not work seamlessly with web components defined in polymer, although it is possible to make the two work together with a bit of glue code. <p>The newer version, angular2, is backwards incompatible with angular (though the concepts are similar). It aims to provide seamless integration of web components (as defined by the standard). It is noteworthy that annotations are advocated as the primary means to set up angular2 components. <h2>Conclusion</h2> Web development has become even more interesting with the promise of reusable, encapsulated components. Work on these is still in progress, but I for one am looking forward to more structured web applications that can draw from a large set of reusable components. 
<h2>References</h2> [1] <a href="http://www.infoworld.com/article/2653798/application-development/javascript-creator-ponders-past--future.html">Brendan Eich interview in InfoWorld, 2008</a><br>[2] <a href="https://en.wikipedia.org/wiki/Ajax_(programming)">https://en.wikipedia.org/wiki/Ajax_(programming)</a><br>[3] <a href="https://developer.mozilla.org/en-US/docs/Web/Web_Components">Web Components description in Mozilla Developer Network</a><br>[4] <a href="https://en.wikipedia.org/wiki/Web_Components">https://en.wikipedia.org/wiki/Web_Components</a><br>[5] <a href="http://jonrimmer.github.io/are-we-componentized-yet/">Are we componentized yet?</a>Burak Emirhttp://www.blogger.com/profile/01651698653168460072noreply@blogger.com0tag:blogger.com,1999:blog-28320851.post-61156499459496580992015-03-31T14:47:00.005-07:002020-12-22T05:36:54.287-08:00Educational Programming Environments for ChildrenIf one considers programming a passion, then one can endlessly search for the "essence" of programming. Sooner or later, one faces the question: how can something as abstract as programming be taught to children? <b>What</b> will we teach our children? <p>This is a broad topic, so I will focus on describing some programming environments that I saw being used at educational events at the Google Zurich office. The usual disclaimer applies: the opinions herein reflect only my personal point of view, not necessarily that of Google. <h2>Background and Setup</h2> <p>At the Google Zurich office, we occasionally organize events that teach (or rather: expose) programming to children. The occasion/program and audience were slightly different each time, but what was common were the goals: supporting STEM education, closing the gender gap in CS, and at one of the events also giving kids an impression of what their parents are doing ... basically showing them that this stuff is fun. 
Note that this wasn't a controlled experiment of any sort - my comparison of the environments we used is more of an afterthought. <p>I have participated in a few of these events as a facilitator and seen how three educational programming environments were applied in practice, and I want to share a quick overview in the hope that it is useful for future events of this kind. These were: <ul><li>Logo,</li><li>Agent Cubes, and </li> <li>Scratch.</li> </ul> <p>All three environments were used in their online variant in the context of educational events organized for groups of kids: 30-40 Chromebooks and a bunch of volunteers to help in case of questions. Many times, a single child would get a machine of their own, but we also did groups of 2-3. Sometimes we would spend some time briefing the volunteers, but this was pretty minimal. <p> <h2>Logo</h2> This event was a 2h workshop and took place in November 2012. I was the main facilitator for it, and had previously written a Logo interpreter, <a href="http://www.burakemir.ch/arrowlogo/ArrowLogo.html">ArrowLogo</a>, in my 20% time (*). I have also used it to discuss programming in a 30 minute session with a group of school children visiting the office in January 2015. <div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-APfDPFc8XsM/VRfTDPw-cvI/AAAAAAAAJqk/bi6lVJsDu7w/s1600/Screen%2BShot%2B2015-03-29%2Bat%2B12.24.36%2BPM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-APfDPFc8XsM/VRfTDPw-cvI/AAAAAAAAJqk/bi6lVJsDu7w/s320/Screen%2BShot%2B2015-03-29%2Bat%2B12.24.36%2BPM.png" /></a></div><p style="font-size: smaller; font-color:lightgray">(*) I used to code in Logo when I was a kid myself. There are other, Java-based Logo implementations available, e.g. <a href="http://xlogo.tuxfamily.net">xlogo</a>, but note that using these would require installation of a desktop app, or enabling the Java plugin in a web browser. 
Installing desktop apps is often not an option (e.g. Chromebooks won't allow this), and enabling the Java plugin often turned out to be a security hazard in the past. </p> <a href="http://en.wikipedia.org/wiki/Logo_(programming_language)">Logo</a> was invented by <a href="http://en.wikipedia.org/wiki/Seymour_Papert">Seymour Papert</a> and others in 1967. The design of Logo is deeply rooted in <b>constructivist learning</b>, and Papert's book "Mindstorms: Children, Computers, and Powerful Ideas" is a very accessible resource that explains well how its creator viewed it as a tool not to teach programming but to approach education in general. <p>The summary is this: children are <b>exploring</b> the environment, essentially teaching themselves. The "teaching yourself" part will resonate with every programmer. LOGO achieves this by providing a facility for turtle graphics: kids instruct the turtle where to move in order to create paintings. This is simple procedural (imperative) programming. <p>For practical purposes, in a short workshop one will usually want to provide exercises to give some structure to the session; I was able to use <a href="http://www.abz.inf.ethz.ch/primarschulen-stufe-sek-1/unterrichtsmaterialien/">LOGO teaching resources from ABZ</a>, a group at ETH Zurich that is promoting informatics education for young kids. For beginners, it is very easy to come up with turtle graphics exercises: you can draw triangles, boxes, houses and move on to circles, flowers, filled boxes, colored paintings. <p>Something notable about Logo is that the children's activity is very close to what programmers do: manipulate source code in textual representation, run, even "debug". It also means that some basic typing skills and being able to navigate a very simple graphical user interface with a mouse are requirements. 
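<p>To give a taste of that exercise progression, here is roughly what such a session builds up to, written in generic Logo (a sketch only; ArrowLogo's exact command set may differ):

```logo
; teach the turtle a named procedure that draws a square
TO SQUARE :SIZE
  REPEAT 4 [FORWARD :SIZE RIGHT 90]
END

; reuse it: twelve slightly rotated squares make a "flower"
TO FLOWER
  REPEAT 12 [SQUARE 50 RIGHT 30]
END

FLOWER
```

The jump from SQUARE to FLOWER is usually the moment where procedures and reuse "click" for kids.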
<p> Learning the ArrowLogo environment is very easy because it is very basic: there is a graphics panel, some place to write the code, and some commands to control what is happening, and that is it. The turtle graphics provides immediate feedback, and it is very easy to start over, giving children motivation to sort things out by themselves. After a short lead time, children seemed to be able to use the environment fine. <p>On the other hand, there are no fancy images or sounds, and building interactive things or games was out of scope. Also, ArrowLogo does not provide a way to save programs or take them home; since most programs written in this short time were basic and easy to reproduce, this did not seem to be a shortcoming. <h2>Scratch</h2> This event took place in March 2015; the audience was young girls (ages 10-12), and each of them had their own chromebook. They accessed <a href="https://scratch.mit.edu">scratch.mit.edu</a> with profiles that were created ahead of time (some confusion arose since some tried to treat their scratch profile info as a (Google-) account). <p>Scratch is developed by the MIT media lab, which is a successor to the group that created Logo. Scratch offers a visual programming editor: kids can create and customize actors and control them through scripts. The scripts are not written as textual source code, but built by combining blocks using drag-and-drop. <div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-tzGP7N_3o-Q/VRfTj3VS7eI/AAAAAAAAJqs/whIX6NLGcG0/s1600/Screen%2BShot%2B2015-03-29%2Bat%2B12.26.56%2BPM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-tzGP7N_3o-Q/VRfTj3VS7eI/AAAAAAAAJqs/whIX6NLGcG0/s320/Screen%2BShot%2B2015-03-29%2Bat%2B12.26.56%2BPM.png" /></a></div><p>The user interface is complex. 
There is almost no typing needed, except for entering values; however, a good command of the mouse and a good understanding of graphical user interfaces are essential, and this was clearly a problem for some of the participants. <p>Scratch is fascinating for kids, because it enables them to do game or animation development. However, it is also a moderately complex environment to learn, and if one is not ready to deal with this complexity, then the insights into programming that children can get are quite limited. <p>I showed kids how they could add keyboard control for their actors or automate their movement, but I feel that at least with some of the children, there was no understanding of what was happening and why, and no engagement. With such a complex environment, it is easy to fall back into an "instructionist" pattern and tell kids every single step they need to take and ask them to mechanically repeat it. Alternatively, one can accept that some of the participants will be distracted by looking at different shapes, being obsessed with drawing things or playing sounds. <p>Kids could save their work, take their profile details home and continue. Though many said they would like to continue, I have some doubts. <h2>Agent Cubes</h2> <a href="http://www.fhnw.ch/people/alexander-repenning/profile">Alexander Repenning</a> visited Google to run an "Hour of Code" workshop in May 2014, using his software <a href="https://www.agentcubesonline.com">Agent Cubes Online</a>. <p>Just as in scratch, the user can add agents and define behavior for them using rules. The focus is clearly on game design, and it is really easy to clone agents (e.g. think of the many trucks in the frogger game, all running the same code). The user interface of Agent Cubes Online, however, is even harder to get used to than scratch's. There are too many buttons, most of them too small. As in scratch, there is a multitude of things that can be done, like drawing your own agents. 
<p>The 'hour of code' was very structured: small demonstrations by the teacher would alternate with periods where kids were free to replicate the steps. Clearly, we have left any constructivist pretensions behind, since the purpose is game design at all costs. <p>I think the level of engagement was clearly there, but it is hard to tell whether children would be able to "make it their own" later. Certainly, some kids enjoyed playing with the thing, but I don't think the majority of them thought of themselves as programmers or would return to game development. <p> <div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-k5TYbdXdHKs/VRr4ttrb1-I/AAAAAAAAJs8/couAPqnSLUU/s1600/Screen%2BShot%2B2015-03-31%2Bat%2B9.42.11%2BPM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-k5TYbdXdHKs/VRr4ttrb1-I/AAAAAAAAJs8/couAPqnSLUU/s320/Screen%2BShot%2B2015-03-31%2Bat%2B9.42.11%2BPM.png" /></a></div> Kids were able to save their work to their profile and could continue their work later. This required registering with their email address at the end of the workshop. <p><h3>Conclusion: use code.org</h3> My personal "conclusions" from all these were the following: <ul> <li>for kids, visual programming can be quite helpful, because it is appealing and easier to use. I am looking forward to seeing such a session with a touch screen interface</li> <li>there seems to be a wide spectrum between turtle graphics and game development. "Flashiness" seems to be a factor in getting some kids engaged; or as the game development motto goes, "[audiovisual] content is king" <li>ArrowLogo is seriously lacking facilities for developing interactive things or animations. </ul><p>While doing some research, I found there is a more convincing solution in this design space. 
The Google-developed <a href="https://developers.google.com/blockly/">blockly</a> library for visual programming has been used at the <a href="http://code.org/learn">code.org Hour of Code</a> to create an introductory experience that easily beats all three environments that I described above; I mean: you can code turtle graphics by controlling a Disney(tm) character using visual programming blocks, and the character makes realistic ice-skating sounds as it moves! <p>I think if we take a step back, we will see that nothing can be concluded: it seems that the quest for the "right" answer will continue. I hope I have at least been able to shed some light on factors that contribute to a successful intro-to-programming event for kids. Burak Emirhttp://www.blogger.com/profile/01651698653168460072noreply@blogger.com0tag:blogger.com,1999:blog-28320851.post-10921171403252516452015-03-28T02:36:00.000-07:002015-03-28T02:42:53.961-07:00angular2 dart nested componentsThis is a quick post to share how to do nested components in angular2 dart. It is a slightly expanded version of this <a href="https://stackoverflow.com/questions/29121678/angular-2-component-inside-main-component/29125430">stackoverflow answer</a>. 
We start from the <a href="https://angular.io/docs/dart/latest/quickstart.html">quickstart tutorial</a> as follows: The <code>index.html</code> is unchanged: <pre><br /><html><br /> <head><br /> <title>My App</title><br /> </head><br /> <body><br /> <my-app></my-app><br /> <script type="application/dart" src="main.dart"></script><br /> <script src="packages/browser/dart.js"></script><br /> </body><br /></html><br /></pre> In our <code>main.dart</code> file, we can reference components inside our templates as long as we also reference them in the directives field: <pre><br />import 'package:angular2/angular2.dart';<br /><br />// These imports will go away soon:<br />import 'package:angular2/src/reflection/reflection.dart' show reflector;<br />import 'package:angular2/src/reflection/reflection_capabilities.dart' show ReflectionCapabilities;<br /><br />@Component(selector: 'graphics-panel')<br />@Template(inline: '<b>I am the graphics panel</b>')<br />class <b>GraphicsPanel</b> {}<br /><br />@Component(selector: 'input-panel')<br />@Template(inline: '<i>I am the input panel</i>')<br />class <b>InputPanel</b> {}<br /><br />@Component(selector: 'arrow-logo-app')<br />@Template(<br /> inline: '''<br /> <div><h1>Hello {{ name }}</h1> <br /> <graphics-panel></graphics-panel><br /> <input-panel></input-panel></div><br />''',<br /> <b>directives: const [GraphicsPanel, InputPanel]</b><br />)<br />class <b>AppComponent</b> {<br /> String name = 'Bob';<br />}<br /><br />main() {<br /> // Temporarily needed.<br /> reflector.reflectionCapabilities = new ReflectionCapabilities();<br /> <br /> bootstrap(AppComponent);<br />}<br /></pre>Burak Emirhttp://www.blogger.com/profile/01651698653168460072noreply@blogger.com0tag:blogger.com,1999:blog-28320851.post-84355894048263186532014-06-15T12:44:00.000-07:002014-06-15T13:13:23.737-07:00cflags weirdness in gypI had posted a few hints for gyp and ninja usage a while ago. 
I still think this combination makes a great build system, but there are also a few weird things to mention. Found the solution on <a href="https://gist.github.com/TooTallNate/1590684">this gist thread</a> and <a href="http://stackoverflow.com/questions/15171270/using-c11-with-gyp-project">this question and answer on stackoverflow</a>: if you want to specify C compiler flags or C++ compiler flags, you would do this via "xcode_settings". No comment. <pre><br />{<br /> 'make_global_settings': [<br /> ['CXX', '/usr/bin/clang++' ],<br /> # If you don't want to do this hack ...<br /> # ['CXX', '/usr/bin/clang++ -std=c++11' ],<br /> ],<br /> 'targets': [<br /> {<br /> 'target_name': 'shc',<br /> 'type': 'executable',<br /> 'sources': [<br /> # ... omitted<br /> ],<br /> # ... you would do this. It will work with the ninja generator.<br /> 'xcode_settings': {<br /> 'OTHER_CPLUSPLUSFLAGS': [<br /> '-std=c++11'<br /> ]<br /> }<br /> },<br /> ],<br />}<br /></pre>Burak Emirhttp://www.blogger.com/profile/01651698653168460072noreply@blogger.com0tag:blogger.com,1999:blog-28320851.post-61097677289866927362012-02-08T00:13:00.000-08:002012-02-08T00:15:41.126-08:00variables in gypThese are the ways to set variables you can reference in your gyp files.<br /><br />Variables are referenced using the expansion character '<', like so <code>'<(foo)'</code>.<br />This happens in <code>ExpandVariables</code>, see <a href="http://code.google.com/p/gyp/source/browse/trunk/pylib/gyp/input.py">[input.py]</a>. 
There is also an "expand to list" variant <code>'<@'</code>, which works if the context expects a list and if the string being expanded does not contain anything else.<br />Here is a toy <code>.gyp</code> file that references a variable <code>foo</code>:<br /><pre>{<br /> 'targets': [<br /> {<br /> 'target_name': 'foo_target',<br /> 'type': 'executable',<br /> 'sources': [ 'yo.c' ],<br /> 'actions': [<br /> {<br /> 'action_name': 'greet',<br /> 'action': [ 'echo', 'hello world <(foo)', ],<br /><br /> 'inputs': [],<br /> 'outputs': [ 'yo.c' ],<br />}]}]}<br /></pre><br />Now you could add a <code>variables</code> section before <code>'targets'</code> like so<br /><br /><pre>'variables' : {<br /> 'foo': 'bar'<br /> },<br /></pre><br />However, most of the time you will want to set these separately. You can do this in two ways.<br /><br /><br /><b>1) include a separate file.</b><br />For this we would create a file <code>some_vars.gypi</code> that contains just this:<br /><pre>{<br /> 'variables' : {<br /> 'foo': 'bar'<br /> },<br />} <br /></pre>and add an include to the original <code>.gyp</code> file<br /><pre>'includes': [ 'some_vars.gypi' ]<br /></pre><br />Here is the result:<br /><pre>$ GYP_GENERATORS=ninja gyp --depth=0 --toplevel-dir=`pwd` \<br /> simple_action.gyp <br />$ ninja -C out/Default all <br />ninja: Entering directory `out/Default'<br />[1/3] ACTION foo_target: greet<br />hello world bar<br />[...]<br /></pre><br /><br /><b>2) Set it on the command line.</b> <br /><br />gyp offers a "define" facility with the <code>-D</code> flag, which is also used to pass definitions down to the C/C++ compiler. 
The code says "-D is for default", so if the variable is already defined, that value is going to be used instead of your command-line value.<br /><br />Using the original <code>.gyp</code> file, we run:<br /><br /><pre>$ GYP_GENERATORS=ninja gyp --depth=0 --toplevel-dir=`pwd` \<br /> simple_action.gyp <b>-Dfoo=baz</b><br />$ ninja -C out/Default all <br />ninja: Entering directory `out/Default'<br />[1/3] ACTION foo_target: greet<br />hello world baz<br /></pre><br />For an example with list expansion, check out the [<a href="http://code.google.com/p/gyp/wiki/GypLanguageSpecification#Actions">Actions example</a>] in the gyp language spec.Burak Emirhttp://www.blogger.com/profile/01651698653168460072noreply@blogger.com0tag:blogger.com,1999:blog-28320851.post-76876299675228276032012-02-03T00:02:00.000-08:002012-02-03T00:02:39.545-08:00gyp and ninja, hello worldBuilding software can be very complicated. Not writing the code, but just calling the various tools that will turn it into an executable.<br /><br />For some time, I have been working on a 20% project that brought me back to the realm of open source development in C / C++. One thing that struck me is how many different build systems exist today. In this post, I just want to share a micro-tutorial on a particular combination.<br /><br /> * <a href="http://code.google.com/p/gyp/"><b>gyp</b></a> - a meta-build system<br /><br /> * <a href="http://martine.github.com/ninja/"><b>ninja</b></a> - a tiny make-like build tool<br /><br />These are children of the chromium project. For <b>gyp</b>, we describe targets in a high-level manner, with the option of platform-specific tweaks. Then a gyp "generator" will output platform-specific build files (the choice includes xcodebuild, visual studio ...). <br /><br />Given that gyp needs to have all the information necessary in order to generate build files, one does not really need a complicated lower-level build tool.
Consequently, <b>ninja</b> is a minimal build tool that lacks a lot of make's fanciness and flexibility, but is enough to do the job.<br /><br />Here is a sneak peek at how to use these two: assume I have a helloworld project like this.<br /><code><br />helloworld.gyp<br />src/helloworld.cc<br /></code><br /><br />This is what the gyp file contains:<br /><br /><pre style="font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; color: #000000; background-color: #eee;font-size: 12px;border: 1px dashed #999999;line-height: 14px;padding: 5px; overflow: auto; width: 100%"><code>{<br /> 'targets': [<br /> {<br /> 'target_name': 'my_target',<br /> 'type': 'executable',<br /> 'sources': [<br /> 'src/helloworld.cc',<br /> ],<br /> },<br /> ],<br />}<br /></code></pre><br />We can now run gyp, which generates the platform-specific build files. <br /><code><br />$ GYP_GENERATORS=ninja gyp helloworld.gyp --toplevel-dir=`pwd` --depth=0<br /></code><br /><br />Now it is just a matter of running ninja, which takes care of building my target:<br /><code><br />$ ninja -C out/Default/ all<br />ninja: Entering directory `out/Default/'<br />[2/2] LINK my_target<br /></code><br /><br />And that was it, helloworld is built. <br /><br />ninja works for MacOS and Linux today, which is the combination I need, and maybe one day there will be some windows support. Since I have nothing to release yet, this is good enough for my purposes, YMMV.Burak Emirhttp://www.blogger.com/profile/01651698653168460072noreply@blogger.com0tag:blogger.com,1999:blog-28320851.post-60856781604049018732011-06-02T04:22:00.000-07:002011-06-11T09:43:28.967-07:00Java wildcards need tamingIn this year's PLDI, there is a session where every paper starts with the gerund "Taming ... "<br />Seems like some computer science topics are inherently wild. And what else could these papers talk about than ... Java wildcards :)<br /><br />Ok, ok, it's only two papers after all.
Still, the shrewishness is real: ever since Java wildcards were imposed upon programmers, we have found ourselves running into weird situations where people need silly casts, or declare two variables of different types for the same value "to make the compiler happy". <br /><br />Well, if you feel like turning it around and making the compiler unhappy for once, try feeding it this :)<br /><pre><br />// source: <br />// Ross Tate, Alan Leung, Sorin Lerner. <br />// Taming Wildcards in Java’s Type System. In PLDI '11<br /><br />import java.util.List;<br />class C<P extends List<? super C<D>>> implements List<P> {}<br />class D implements List<C<? extends List<? super C<D>>>> {}<br /></pre>Burak Emirhttp://www.blogger.com/profile/01651698653168460072noreply@blogger.com0tag:blogger.com,1999:blog-28320851.post-25953457379961006062011-05-10T13:14:00.000-07:002011-05-10T13:14:36.439-07:00Finishing SOLA 2011<div class="separator" style="clear: both;">Some pics from the SOLA run last weekend, after 11 km. I wasn't in shape at all, and for some reason was so tense that I had sore muscles in the shoulders (?), but it was good fun.
I guess this is the smile of a person who is glad to be still alive :)</div><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-0mDwsHG0okw/Tcmaxf-YYkI/AAAAAAAAAP8/EdfQdHT3xG4/s1600/sola11_e06_bj_1235.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="http://3.bp.blogspot.com/-0mDwsHG0okw/Tcmaxf-YYkI/AAAAAAAAAP8/EdfQdHT3xG4/s320/sola11_e06_bj_1235.jpg" width="213" /></a></div><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-eGQueGQhbgo/TcmbBOpV_cI/AAAAAAAAAQA/hiB08S-e7-M/s1600/sola11_e06_bj_1237.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="http://2.bp.blogspot.com/-eGQueGQhbgo/TcmbBOpV_cI/AAAAAAAAAQA/hiB08S-e7-M/s320/sola11_e06_bj_1237.jpg" width="213" /></a></div>Burak Emirhttp://www.blogger.com/profile/01651698653168460072noreply@blogger.com0tag:blogger.com,1999:blog-28320851.post-22897308298408001822011-05-06T11:01:00.000-07:002011-05-06T11:01:12.789-07:00Prepared for SOLA 2011<div class="separator" style="clear: both;">Tomorrow, there is a run around Zurich, called <a href="http://portal.sola.asvz.ethz.ch/Seiten/default.aspx">SOLA</a>. Teams of 14 people. Thanks to Aaron Z, I am going to take part in it. This is the first time I'll take part in a competition - excitement!</div><div class="separator" style="clear: both;"><br /></div><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-wILxAQVnEF4/TcQxswGAsCI/AAAAAAAAAPc/jzQQolvSg-U/s1600/IMG_20110506_191650.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="http://1.bp.blogspot.com/-wILxAQVnEF4/TcQxswGAsCI/AAAAAAAAAPc/jzQQolvSg-U/s320/IMG_20110506_191650.jpg" width="240" /></a></div>Burak Emirhttp://www.blogger.com/profile/01651698653168460072noreply@blogger.com1