## 10.5.18

### Taking a Glance at Containers, Packages and Development

Time to dust off this blog.

A long time ago, I worked on (yet another) open source implementation of an RPC plugin (code generation + a client/server framework) for handling the "services" that are part of protobuf. I had briefly looked at existing C++ solutions, but I remember I primarily cared about finding out what it would take. I searched for ages for a suitable build tool, ultimately settling on gyp and ninja. And then, I still needed to handle dependencies.

At some point I must have lost interest; many years have passed, and Bazel was open-sourced in the meantime. What (subjectively) changed more than anything else is that the world has moved on: docker containers and cloud setups are becoming the norm rather than the exception, the word "DevOps" was invented, kubernetes is a thing. So people must have a way to deal with APIs. There is something called Swagger which can document APIs and also generate client- and server-side code, for many languages and targeting language-specific frameworks...

## Docker for Development

Apart from the revolution that this means for operations and delivery, this led me to think that a key part of the development story is addressed this way, namely declaring dependencies and integrating them into the build process. Indeed I found a nice article by Travis Reeder that describes exactly these advantages, along with helpful examples and command line references.

## Build tools, Modularity and Packages

Modularity is pretty much essential to serious development. I remember from university times that the academic view of modularity was framed primarily from the angle of programming languages (in the old times, "separate compilation"; later, type systems). Nowadays, nobody considers providing a programming language implementation in isolation: Rust, for instance, has the notion of crates and modules, and what's more, cargo, the tool that manages crates, is mentioned in the tutorial. npm would be another good example. In Java-land, there is a distinction between libraries on the class path and managing external dependencies using build tools like maven or gradle.

Just to state the obvious: the old view of modularity as a low-level problem (linking and type systems) ignores the fact that a "modern" build tool has to handle package/module/crate repositories (multiple of them). On the other hand, the emergence of such repositories (plural) has its own problems, as when the "left-pad" package was pulled and broke many people's builds. "npm disaster" is an entertaining search query, but it does lead to not-easily-resolved questions about the right granularity of modules/packages.

## Image or Container?

With Docker as an emerging "modularity mechanism", it is very interesting to observe that in conversation (or documentation) people confuse a docker image with a docker container. If you are confused, this post will get you on the right path again.

There is a parallel between a "module" as something that can be built and has an interface, and "module" as something that is linked and part of a running system. Since a container is a whole system in itself, networking is part of its interface. When building a larger system from containers as building blocks, the problem of connecting these requires tackling "low-level" configuration of an entirely different kind (see Kubernetes docs) than what "module linking" meant in the 80s. Still, I'd consider this a parallel.

## Where are the best practices?

A quick search reveals that 53k+ projects have Dockerfiles, including e.g. tensorflow, which explicitly mentions deployment and development Docker images. So while I initially thought I had not slept through many essential changes, it seems on the contrary that things are happening at an entirely different level. What conventions should one follow, and what are the pitfalls?

Fortunately for me, there are smart people good at posting articles about these things, so I can appreciate the problems and tentative solutions/best practices mentioned in Top 5 Kubernetes Best Practices. Hat-tip to the "Make sure your microservices aren’t too micro" advice.

## D.C. al Coda

How can there be an end to this? Containerization seems to be a perpetual journey. At this point, I would have liked to offer tips based on a "typical" example of a development project and how one would use docker to set it up. However, I am not very active in terms of open source development, and the one project I want to get back to most isn't a developer tool at all. Fortunately, I believe best practices for "docker for development" aren't going to be any different from the best practices for using docker in general.

There is an interesting point here: in order to build a container image, one still has to solve the problem of gathering the dependencies. So the problem of gathering dependencies at build time has only been shifted one level (up? down?), though the constraints are very different: an organization can find easy ways to make docker images available to its developers, and only the team that provides the docker image has to gather the dependencies. This appears to me much saner than a development workflow that makes connections to external repositories from the developer's box at build time.
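To make this concrete, here is a minimal sketch of what such a team-provided build image could look like (the image name and the package list are made up for illustration; any real build image would pin versions and be more elaborate):

```dockerfile
# Build image maintained by one team; it gathers the build-time
# dependencies once, so individual developers don't have to.
FROM ubuntu:18.04
RUN apt-get update && apt-get install -y \
    build-essential \
    ninja-build \
    libprotobuf-dev \
    protobuf-compiler
WORKDIR /src
```

A developer would then build inside the container, along the lines of `docker run --rm -v "$(pwd)":/src my-org/cpp-builder ninja -C out`, without ever contacting external package repositories from their own machine.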

## 19.6.15

### Web Components and Dart Frameworks

Recently, I visited the scientific software and data management (SSDM) group at ETH to give a talk about web components and dart frameworks. Here is a summary.

## Looking back: a brief History of Web Apps

We live in interesting times: there are many frameworks and languages out there, and in order to understand the context and why things are how they are, it helps to look at history. The web as a platform for applications has evolved in an erratic process, which I will divide into three broad phases:
• the "Middle ages": content generated on the server-side, Java born with the promise of applets, Javascript launched as a tool for web designers (read: non-programmers), browser wars [1]. Client-side coding was a matter of achieving visual effects, client-server communication happened through forms and page-loads.
• the "Renaissance": asynchronous Javascript[2] becomes ubiquitous, client-server communication is now decoupled from forms and page-loads. Rich client-side apps like GMail or Google Maps demonstrate its use.
• "Modern History": high-performance layout and javascript engines are the norm. HTML5 delivers a reinterpretation of browser documents and APIs, tailored to web applications development. Mobile devices get high resolution screens and useful browsers.

HTML5 continues to evolve, and not all HTML5 technologies are supported on every browser (so developers check caniuse.com). The idea of a polyfill is born: adapter code that simulates functionality that does not exist natively in the browser.
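The pattern behind every polyfill can be sketched in a few lines of plain Javascript; here with a small array method rather than a full HTML5 API, so it also runs outside the browser, but the feature-detect-then-shim structure is the same:

```javascript
// Polyfill pattern in miniature: detect the missing feature,
// then fill it in with an implementation in plain Javascript.
if (!Array.prototype.includes) {
  Array.prototype.includes = function (needle) {
    // Fall back to indexOf, which older engines do support.
    return this.indexOf(needle) !== -1;
  };
}

// Calling code can now rely on the feature unconditionally.
console.log(['a', 'b'].includes('b')); // prints: true
```

Polyfills for web components work the same way at heart, just with far more machinery: they detect missing DOM APIs and patch them onto the relevant prototypes.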

## Web Components

With application development on the web becoming the norm, the traditional questions of software engineering arise: how to approach the development of large systems? How to encapsulate and reuse? Web components [3,4] offer an answer.

Web Components are four technologies that together achieve reusable units of behavior:

• custom elements
• shadow DOM
• templates
• HTML imports

These technologies are being standardized and implemented in a few browsers. Even when not available natively, there is the possibility of using polyfills.

As with any web technology, there are good documentation resources available on the web that explain these. In the following, I'll just include a simple example for completeness.

### Defining a new custom element

Say we want to write a bug-tracking system that displays a number of bug entries. We'd like to treat the entries as we would treat other HTML elements (note the fancy new HTML5 syntax):

  <bug-entry bug-id=1e4f bug-status=open></bug-entry>


This is possible with custom elements; we'd register the element like so:

// Create a new object based on the HTMLElement prototype.
var BugEntryProto = Object.create(HTMLElement.prototype);

// Set up the element.
BugEntryProto.createdCallback = function() {
  // ... render the bug entry, e.g. with an img, a click handler/link.
};

// Register the new element.
var BugEntry = document.registerElement('bug-entry', {
  prototype: BugEntryProto
});


### Filling the element with content

Most of the time, custom elements will actually correspond to some other markup. With HTML templates, it is possible to include template markup in documents and instantiate it.
<template id="bug-entry-template">
  <tr><td><img class="bug-image"></td>
      <td><div class="bug-title"></div></td></tr>
</template>

BugEntryProto.createdCallback = function() {
  // ...
  var template = document.querySelector('#bug-entry-template');
  var clone = document.importNode(template.content, true);
  // ...
};


### Specifying layout and style of the content

In HTML, layout and style are specified using CSS; however, the specification is global. In order to have a self-contained, encapsulated unit of code, we need a mechanism that enables reuse without worrying about style. This is what the shadow DOM achieves: content can be added to the DOM in a way that keeps it separate from the rest of the DOM.
BugEntryProto.createdCallback = function() {
  var template = document.querySelector('#bug-entry-template');
  var clone = document.importNode(template.content, true);
  // Attach a shadow root and add the content there,
  // keeping it separate from the main DOM.
  var root = this.createShadowRoot();
  root.appendChild(clone);
};



### Importing the new component

The sections above have defined a component. Now we would like to be able to use the component. HTML imports provide a way to import components easily:
<link rel="import" href="bug-entry.html">


## Dart Frameworks

### Polymer

The Polymer.js library provides, among other things:
• Polyfills to get web components in browsers that don't support them
• Tools like vulcanize that translate away HTML imports for the purposes of deployment

The polymer.dart library is a wrapper around polymer.js that makes the polyfill as well as the components available in dart. It also offers an easy, integrated syntax, which is mainly made possible by reflection. Instead of using vulcanize, users of polymer.dart rely on the polymer transformer, which translates away the use of reflection and also takes care of combining all the imported components into one unit of deployment.
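Concretely, enabling the transformer is a matter of a few lines in the project's pubspec.yaml; this is a sketch (the project name and entry point are made up, and the exact options depend on the polymer.dart version in use):

```yaml
name: bug_tracker
dependencies:
  polymer: any
transformers:
- polymer:
    entry_points: web/index.html
```

Running `pub build` then applies the transformer and produces a deployable output directory.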

There is a bottom-up development approach suggested by the availability of polymer: start by building basic components and assemble them into ever larger ones until you have an application. It is important to note that polymer is not an application development framework: things like logical pages, views and routing are not standardized, but can be provided by components.

### Angular, Angular.dart and Angular2

Angular existed before web components and polymer. It was always a framework for application development, so it provides facilities for pages, views and routing. The dart version of angular is significantly different from the js version, again because the use of reflection permits a nice style that uses annotations. Angular does not work seamlessly with web components defined in polymer, although it is possible to make the two work together with a bit of glue code.

The newer version, angular2, is backwards incompatible with angular (though concepts are similar). It aims to provide seamless integration of web components (as defined by the standard). It is noteworthy that annotations are advocated as the primary means to set up angular2 components.

## Conclusion

Web development has become even more interesting with the promise of reusable, encapsulated components. Work on these is still in progress, but I for one am looking forward to more structured web applications that can draw from a large set of reusable components.

## References

[1] Brendan Eich interview in InfoWorld, 2008
[2] https://en.wikipedia.org/wiki/Ajax_(programming)
[3] Web Components description in Mozilla Developer Network
[4] https://en.wikipedia.org/wiki/Web_Components
[5] Are we componentized yet?

## 31.3.15

### Educational Programming Environments for Children

If one considers programming a passion, then one can endlessly search for the "essence" of programming. Sooner or later, one faces the question: how can something as abstract as programming be taught to children? What will we teach our children?

This is a broad topic, so I will focus on describing some programming environments that I saw being used at educational events at the Google Zurich office. The usual disclaimer applies: the opinions herein reflect only my personal point of view, not necessarily that of Google.

## Background and Setup

At the Google Zurich office, we occasionally organize events that teach (or rather: expose) programming to children. The occasion/program and audience were slightly different each time, but common to all were the goals of supporting STEM education and closing the gender gap in CS, and, at one of the events, also giving kids an impression of what their parents are doing ... basically showing them that this stuff is fun. Note that this wasn't a controlled experiment of any sort - my comparison of the environments we used is more of an afterthought.

I have participated in a few of these as a facilitator and seen how three educational programming environments were applied in practice. I want to share a quick overview in the hope that it is useful for future events of this kind. These were:

• Logo,
• Agent Cubes, and
• Scratch.

All three environments were used in their online variant in the context of educational events organized for groups of kids: 30-40 Chromebooks and a bunch of volunteers to help in case of questions. Often, a single child would get a machine of their own, but we also had groups of 2-3. Sometimes we would spend some time briefing the volunteers, but this was pretty minimal.

## Logo

This event was a 2h workshop and took place in November 2012. I was the main facilitator for it, having previously written a Logo interpreter, ArrowLogo, in my 20% time (*). I have also used it to discuss programming in a 30-minute session with a group of school children visiting the office in January 2015.

(*) I used to code in Logo when I was a kid myself. There are other, Java-based Logo implementations available, e.g. xlogo, but using these would require installing a desktop app or enabling the Java plugin in a web browser. Installing desktop apps is often not an option (e.g. Chromebooks won't allow it), and enabling the Java plugin has often turned out to be a security hazard in the past.

Logo was invented by Seymour Papert and others in 1967. The design of Logo is deeply rooted in constructivist learning, and Papert's book "Mindstorms: Children, Computers, and Powerful Ideas" is a very accessible resource that explains well how its creator viewed it as a tool not just to teach programming but to approach education in general.

The summary is this: children explore the environment, essentially teaching themselves. The "teaching yourself" part will resonate with every programmer. Logo achieves this by providing a facility for turtle graphics: kids instruct the turtle where to move in order to create paintings. This is simple procedural (imperative) programming.

For practical purposes, in a short workshop one will usually want to provide exercises to give some structure to the session; I was able to use Logo teaching resources from ABZ, a group at ETH Zurich that promotes informatics education for young kids. For beginners, it is very easy to come up with turtle graphics exercises: you can draw triangles, boxes and houses, and move on to circles, flowers, filled boxes and colored paintings.

Something notable about Logo is that the children's activity is very close to what programmers do: manipulate source code in textual representation, run it, even "debug" it. It also means that some basic typing skills and being able to navigate a very simple graphical user interface with a mouse are requirements.

Learning the ArrowLogo environment is very easy because it is very basic: there is a graphics panel, some place to write the code, and some commands to control what is happening, and that is it. The turtle graphics provide immediate feedback, and it is very easy to start over, giving children motivation to sort things out by themselves. After a short lead time, children seemed able to use the environment fine.

On the other hand, there are no fancy images or sounds, and building interactive things or games was out of scope. Also, ArrowLogo does not provide a way to save programs or take them home; since most programs written in this short time were basic and easy to reproduce, this did not seem to be a shortcoming.

## Scratch

This event took place in March 2015; the audience was young girls (ages 10-12), and each of them had her own chromebook. They accessed scratch.mit.edu with profiles that were created ahead of time (some confusion arose since some tried to treat their scratch profile info as a (Google) account).

Scratch is developed by the MIT Media Lab, a successor to the group that created Logo. Scratch offers a visual programming editor: kids can create and customize actors and control them through scripts. The scripts are not written as textual source code, but by combining blocks using drag-and-drop.

The user interface is complex. There is almost no typing needed, except for entering values; however, good command of a mouse and a good understanding of graphical user interfaces are essential, and this was clearly a problem for some of the participants.

Scratch is fascinating for kids, because it enables them to do game or animation development. However, it is also a moderately complex environment to learn, and if one is not ready to deal with this complexity, then the insights into programming that children can get are quite limited.

I showed kids how they could add keyboard control for their actors or automate their movement, but I feel that at least for some of the children, there was no understanding of what was happening and why, and no engagement. With such a complex environment, it is easy to fall back into an "instructionist" pattern: tell kids every single step they need to take and ask them to mechanically repeat it. Alternatively, one can accept that some of the participants will be distracted by looking at different shapes, being obsessed with drawing things or playing sounds.

Kids could save their work, take their profile details home and continue. Though many said they would like to continue, I have some doubts.

## Agent Cubes

Alexander Repenning visited Google to run an "Hour of Code" workshop in May 2014, using his software Agent Cubes Online.

Just as in Scratch, the user can add agents and define their behavior using rules. The focus is clearly on game design, and it is really easy to clone agents (think of the many trucks in the frogger game, all running the same code). The user interface of Agent Cubes Online, however, is even harder to get used to than Scratch's: there are too many buttons, and most of them are too small. As in Scratch, there is a multitude of things that can be done, like drawing your own agents.

The 'hour of code' was very structured: small demonstrations by the teacher would alternate with periods where kids were free to replicate the steps. Clearly, we have left any constructivist pretensions behind, since the purpose is game design at all costs.

I think the level of engagement was clearly there, but it is hard to tell whether children would be able to "make it their own" later. Certainly, some kids enjoyed playing with the thing, but I don't think the majority of them thought of themselves as programmers or would return to game development.

Kids were able to save their work to their profile and could continue their work later. This required registering with their email address at the end of the workshop.

### Conclusion: use code.org

My personal "conclusions" from all these were the following:
• for kids, visual programming can be quite helpful, because it is appealing and easier to use. I am looking forward to seeing such a session with a touch screen interfaces
• there seems to be a wide spectrum between turtle graphics and game development. "Flashiness" seems to be a factor in getting some kids engaged; or as the game development motto goes "[audiovisual] content is king"
• ArrowLogo is seriously lacking facilities for developing interactive things or animations.

While doing some research, I found that there is a more convincing solution in this design space. The Google-developed blockly library for visual programming has been used at the code.org Hour of Code to create an introductory experience that easily beats all three environments described above; I mean: you can code turtle graphics by controlling a Disney(tm) character using visual programming blocks, and the character makes realistic ice-skating sounds as it moves!

I think if we take a step back, we will see that nothing definitive can be concluded: it seems that the quest for the "right" answer will continue. I hope I have at least been able to shed some light on the factors that contribute to a successful intro-to-programming event for kids.

## 28.3.15

### angular2 dart nested components

This is a quick post to share how to do nested components in angular2 dart. It is a slightly expanded version of this stackoverflow answer. We start from the quickstart tutorial as follows. The index.html is unchanged:
<html>
<title>My App</title>
<body>
<my-app></my-app>
<script type="application/dart" src="main.dart"></script>
<script src="packages/browser/dart.js"></script>
</body>
</html>

In our main.dart file, we can reference components inside our templates as long as we also reference them in the directives field:
import 'package:angular2/angular2.dart';

// These imports will go away soon:
import 'package:angular2/src/reflection/reflection.dart' show reflector;
import 'package:angular2/src/reflection/reflection_capabilities.dart'
    show ReflectionCapabilities;

@Component(selector: 'graphics-panel')
@Template(inline: '<b>I am the graphics panel</b>')
class GraphicsPanel {}

@Component(selector: 'input-panel')
@Template(inline: '<i>I am the input panel</i>')
class InputPanel {}

@Component(selector: 'arrow-logo-app')
@Template(
    inline: '''
      <div><h1>Hello {{ name }}</h1>
      <graphics-panel></graphics-panel>
      <input-panel></input-panel></div>
    ''',
    directives: const [GraphicsPanel, InputPanel])
class AppComponent {
  String name = 'Bob';
}

main() {
  // Temporarily needed.
  reflector.reflectionCapabilities = new ReflectionCapabilities();

  bootstrap(AppComponent);
}


## 15.6.14

### cflags weirdness in gyp

I posted a few hints for gyp and ninja usage a while ago. I still think this combination makes a great build system, but there are also a few weird things to mention. I found the solution in this gist thread and this question and answer on stackoverflow: if you want to specify C or C++ compiler flags, you do this via "xcode_settings". No comment.
{
  'make_global_settings': [
    ['CXX', '/usr/bin/clang++'],
    # If you don't want to do this hack ...
    # ['CXX', '/usr/bin/clang++ -std=c++11'],
  ],
  'targets': [
    {
      'target_name': 'shc',
      'type': 'executable',
      'sources': [
        # ... omitted
      ],
      # ... you would do this. It will work with the ninja generator.
      'xcode_settings': {
        'OTHER_CPLUSPLUSFLAGS': [
          '-std=c++11',
        ],
      },
    },
  ],
}


## 8.2.12

### variables in gyp

These are the ways to set variables you can reference in your gyp files.

Variables are referenced using the expansion character '<', like so: '<(foo)'.
This happens in ExpandVariables (see input.py). There is also an "expand to list" variant, '<@', which works if the context expects a list and the string being expanded does not contain anything else.
Here is a toy .gyp file that references a variable foo:
{
  'targets': [
    {
      'target_name': 'foo_target',
      'type': 'executable',
      'sources': [ 'yo.c' ],
      'actions': [
        {
          'action_name': 'greet',
          'action': [ 'echo', 'hello world <(foo)' ],
          'inputs': [],
          'outputs': [ 'yo.c' ],
        },
      ],
    },
  ],
}


Now you could add a 'variables' section before 'targets', like so:

'variables': {
  'foo': 'bar'
},


However, most of the time you will want to set these separately. You can do this in two ways.

1) Include a separate file.
For this we would create a file some_vars.gypi that contains just this:
{
  'variables': {
    'foo': 'bar'
  },
}

and add an include to the original .gyp file
'includes': [ 'some_vars.gypi' ]


Here is the result:
$ GYP_GENERATORS=ninja gyp --depth=0 --toplevel-dir=`pwd` simple_action.gyp
$ ninja -C out/Default all
ninja: Entering directory `out/Default'
[1/3] ACTION foo_target: greet
hello world bar
[...]


2) Set it on the command line.

gyp offers a "define" facility with the -D flag, which is also used to pass definitions down to the C/C++ compiler. The code says "-D is for default", so if the variable is already defined, that value is going to be used instead of your command line value.

Using the original .gyp file, we do

$ GYP_GENERATORS=ninja gyp --depth=0 --toplevel-dir=`pwd` simple_action.gyp -Dfoo=baz
$ ninja -C out/Default all
ninja: Entering directory `out/Default'
[1/3] ACTION foo_target: greet
hello world baz


For an example with list expansion, check out the Actions example in the gyp language spec.

## 3.2.12

### gyp and ninja, hello world

Building software can be very complicated. Not writing the code, but just calling the various tools that will turn it into an executable.

For some time, I have been working on a 20% project that brought me back to the realm of open source development in C / C++. One thing that struck me is how many different build systems exist today. In this post, I just want to share a micro-tutorial on a particular combination.

* gyp - a meta-build system

* ninja - a tiny make-like build tool

These are children of the chromium project. With gyp, we describe targets in a high-level manner, with the option of platform-specific tweaks. Then a gyp "generator" outputs platform-specific build files (choices include xcodebuild, visual studio, ...).

Given that gyp needs to have all the information necessary to generate build files, one does not really need a complicated lower-level build tool. Consequently, ninja is a minimal build tool that lacks a lot of make's fanciness and flexibility, but is enough to do the job.
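To give a feel for the level ninja operates at, here is roughly what a hand-written build.ninja for a one-file C++ project could look like (simplified; the files gyp actually generates are more elaborate, and the compiler choice here is just an example):

```ninja
cxx = g++

rule compile
  command = $cxx -c $in -o $out
  description = CXX $out

rule link
  command = $cxx $in -o $out
  description = LINK $out

build out/helloworld.o: compile src/helloworld.cc
build out/helloworld: link out/helloworld.o
```

That is essentially the whole language: variables, rules with a command line, and build statements wiring outputs to inputs. Everything clever (configuration, conditionals) is gyp's job.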

Here now is a sneak peek at how to use these two. Assume I have a helloworld project like this:

helloworld.gyp
src/helloworld.cc

This is what the gyp file contains:

{
  'targets': [
    {
      'target_name': 'my_target',
      'type': 'executable',
      'sources': [
        'src/helloworld.cc',
      ],
    },
  ],
}


We can now run gyp, which generates platform-specific build files:

$ GYP_GENERATORS=ninja gyp helloworld.gyp --toplevel-dir=`pwd` --depth=0

Now it is just a matter of running ninja, which takes care of building my target:

$ ninja -C out/Default/ all
ninja: Entering directory `out/Default/'
[2/2] LINK my_target

And that was it, helloworld is built.

ninja works for MacOS and Linux today, which is the combination I need, and maybe one day there will be some windows support. Since I have nothing to release yet, this is good enough for my purposes, YMMV.