Thursday, 23 June 2016

Random Idea #1: P2P Postal Service

Cryptographically secure P2P postal service.

This originally came out of a random idea I had while heading home today: cryptographically authenticated location tracking. I thought this would be a required technology for implementing a secure P2P postal service. It would mostly consist of a way to generate a proof that you were at a specific location (+/- a specified accuracy) at a specific time. It could either be built on top of a central system like GPS or, for maximum liberal sharing economy status, on top of a “mesh-based, cryptographically secure (and private), web of trust (WoT) authenticated” location tracking service1.

While trying to sketch out an actual implementation I realised that you wouldn’t actually need the authenticated location tracking. The package itself already has to do enough crypto and network work to ensure reliable delivery that integrating the location tracking into the package would be a simple step, and would mean it can just trust itself (assuming the actual tracking system is secure).
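As a minimal sketch of what such a self-attested location proof might look like, assuming the tracker’s secure cryptoprocessor holds an Ed25519 key (all the field names here are invented for illustration):

```typescript
import { generateKeyPairSync, sign, verify, KeyObject } from "node:crypto";

// Hypothetical shape of a location attestation the tracker would emit.
interface LocationProof {
  lat: number;       // degrees
  lon: number;       // degrees
  accuracyM: number; // claimed accuracy radius in metres
  timestamp: number; // unix ms
  signature: string; // base64 Ed25519 signature over the fields above
}

const encode = (p: Omit<LocationProof, "signature">) =>
  Buffer.from(JSON.stringify(p));

// The tracker signs its own position fix with the key in its secure element.
function attest(priv: KeyObject, lat: number, lon: number,
                accuracyM: number): LocationProof {
  const body = { lat, lon, accuracyM, timestamp: Date.now() };
  const signature = sign(null, encode(body), priv).toString("base64");
  return { ...body, signature };
}

// Anyone holding the tracker's public key can check the proof.
function check(pub: KeyObject, p: LocationProof): boolean {
  const { signature, ...body } = p;
  return verify(null, encode(body), pub, Buffer.from(signature, "base64"));
}

const { publicKey, privateKey } = generateKeyPairSync("ed25519");
const proof = attest(privateKey, -41.29, 174.78, 25);
```

Of course this only proves the tracker claims it was there; tying the claim to physical reality is exactly what the underlying tracking system has to provide.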

The other major piece required would be a smart contract system like Ethereum2 to handle the contract and payments side. You would just need to:

  • grab one of the proven standard contracts3
  • package your item with a postal tracker
  • mutually authenticate the tracker and contract to each other
  • seed the contract with the final location and funds to pay deliverers
  • leave it somewhere it’s likely to get picked up

The postal tracker would be a very simple IoT module containing a few chips/cores connected together with a very basic firmware:

  • secure cryptoprocessor4
  • tamper detection
  • NFC I/O
  • BTLE beacon
  • location tracker
  • mesh network access module

The BTLE beacon will advertise that this is a package, plus the associated contract’s public key. If you’re part of the delivery network then you would be running an application listening for these beacons; if the delivery location is in the direction you’re heading and the price meets your minimum5, the application would pop up a notification informing you of it. You’d then go to the package and interface with it via NFC to prove you’re actually there before taking it. You carry it for a while, either to its final destination6 or to somewhere that your client has determined is the optimal location for you to leave it. There’d be a brief conversation between the package, your client and the smart contract, and your account would be credited with whatever portion of the delivery fee you’d earned.
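The client-side decision could be sketched roughly like this (the advert fields, units and thresholds are all assumptions, and real clients would use proper geodesic distances):

```typescript
// Hypothetical beacon advert and deliverer settings.
interface PackageAdvert {
  contractKey: string;    // the contract's public key from the BTLE advert
  dest: [number, number]; // delivery destination (x, y), arbitrary units
  feePerUnit: number;     // payment per unit of distance closed
}

interface ClientSettings {
  heading: [number, number]; // where this deliverer is going anyway
  minFeePerUnit: number;
}

const dist = (a: [number, number], b: [number, number]) =>
  Math.hypot(a[0] - b[0], a[1] - b[1]);

// Notify if carrying the package to our own destination moves it closer
// to its destination, and the offered fee meets our minimum.
function shouldNotify(here: [number, number], ad: PackageAdvert,
                      me: ClientSettings): boolean {
  const closed = dist(here, ad.dest) - dist(me.heading, ad.dest);
  return closed > 0 && ad.feePerUnit >= me.minFeePerUnit;
}
```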

Obviously there are many places in here that could be smarter. One big one is the initial pickup: rather than just having a flat % fee for how much closer you get the package, there could be a much larger negotiation between your client and the contract for the package. Depending on how much you trust your client, it could automatically negotiate a complex contract with many different potential fees based on speed of delivery. Once negotiations with the contract are done they would instantiate a new smart contract encoding these potential payments; the package’s contract would fund this new contract and it would be let loose to oversee this part of the package’s journey. This could also be done as some sort of auction system, which would require the contract to have some way to decide how to value speed of delivery versus cost (with any remaining funds being sent back to the sender at the end).
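The simple flat-% scheme is easy to pin down; something like this, with distances in arbitrary units (a sketch, not a spec):

```typescript
// Each deliverer earns the share of the total fee proportional to the
// fraction of the total distance they closed. pickedUpAt and droppedAt
// are remaining distances to the final destination.
function legPayout(totalFee: number, totalDist: number,
                   pickedUpAt: number, droppedAt: number): number {
  const closed = pickedUpAt - droppedAt;
  return (totalFee * Math.max(0, closed)) / totalDist;
}
```

A deliverer who takes the package from 10 units out to 2 units out earns 80% of the fee; whoever finishes the last 2 units earns the remaining 20%.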

Actually, this talk of negotiation made me realise that the controlling entity of the package would not have to be a contract. In the simple case it probably makes sense to just have a single contract in control throughout the lifetime of the package, but if you have complicated negotiations and reverse auctions to run you probably want a self-governing agent running it. If you don’t care about the package being tied to (one of) your public identity(ies) then this could just be (one of) your general purpose public agent(s); otherwise, a specialised agent instantiated and funded for this specific package. This agent would then negotiate and instantiate smart contracts with the deliverers for the purpose of getting the package to its destination.

Another place to add more smarts would be a reputation system. This could be built into the smart contract governing payment: as well as paying out currency it would pay out reputation. Reputation would have to be more complicated than a simple balance though; for the purposes of negotiation it would have to differentiate between things like outright failure to deliver, consistently late delivery and occasional damage during transport.
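A sketch of what a multi-dimensional reputation balance might look like; the categories and weights below are entirely made up, the point is only that different failure modes need to be distinguishable:

```typescript
// A single balance can't distinguish the failure modes a negotiator cares
// about, so track them separately.
interface Reputation {
  deliveries: number;
  failures: number; // packages never delivered
  late: number;     // delivered past the agreed deadline
  damaged: number;  // delivered with tamper/damage flags set
}

// One possible weighting for negotiation purposes: outright failure is
// much worse than damage, which is worse than lateness.
function trustScore(r: Reputation): number {
  if (r.deliveries === 0) return 0;
  const penalty = 10 * r.failures + 2 * r.damaged + 1 * r.late;
  return Math.max(0, (r.deliveries - penalty) / r.deliveries);
}
```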

The reputation system could include things such as having the deliverer create a bond contract before being able to pick up their first few packages. As their reputation improves, the agents in charge of the packages would start lowering the bond, and for very high priority packages delivered by very high reputation deliverers they could even offer a partial pre-payment as funds to perform the delivery with (although that would be better done by the deliverer’s client securing very good terms on a loan by showing the contract for delivery + reputation to an automated loan broker).

</random brain dump>7

  1. Something like:

    • a set of stable nodes that you trust that don’t move
    • a mesh network spreading proofs of relative distances
    • strong confirmations from mobile devices within a few mesh hops within your WoT
    • weak confirmations from mobile devices within a few mesh hops that are close to your WoT; some low number of WoT hops should be provable, preferably without leaking knowledge of the identities along the link, just that a link of x hops exists

    Spreading all those proofs seems like it might be a bit data-heavy, especially the proof of devices close to your WoT.

  2. Or an actually secure/provable successor to it.

  3. So deliverers won’t need to validate that your custom contract will actually pay them.

  4. I really just want to say TPM here, but that’s a standard rather than a generic term; the closest general category seems to be “secure cryptoprocessor”, but that doesn’t have any TLA for it.

    This seems common to me in crypto: DSA (Digital Signature Algorithm) is the first acronym I thought of when wanting to refer to the category of digital signatures, but it too is a standard. The best TLA I could think of for the category of algorithms that implement digital signatures is DSS (Digital Signature Scheme), but what makes it a scheme rather than an algorithm?

  5. This would be a little more complicated than just matching the minimum; it would have to take into account a lot of variables, like how much of the package’s journey you can actually do, how far out of your way you would have to go to maximise your payment/hour etc. Luckily that’s purely a client feature and nothing to do with the basic system.

  6. Probably just dropping it at a location near its recipient; they would then be notified by the contract that it’s ready for pickup. Urgent packages would probably carry an extra bonus for in-person delivery, since finding the actual building and the correct person to give it to is much more difficult than just dropping it somewhere within a 500 m² circle.

  7. Major relevant memes infesting my head:

    • news about the DAO being hacked on HN
    • secure cryptoprocessors, BTLE beacons and sensor (not quite mesh) networks for work
    • currently reading the second story in METAtropolis which mentioned p2p postal service
    • just finished reading Lady of Mazes which (like a lot of Karl Schroeder books) involves a lot of ideas about self-organizing distributed societies and cryptography (Karl Schroeder, Hannu Rajaniemi and about half of Charles Stross should be required reading for anyone working on any p2p crypto system intended to be integrated into everyday society).

Monday, 14 July 2014

TypeScript and npm modules

The new big thing at work is TypeScript1. I like the idea of bringing more potential for compile-time checking and better tooling to JavaScript, and with all these newfangled features like type inference it should hopefully not bring too much overhead to the code.

Recently I’ve also been looking at doing some better integrationy stuff at home. We got a new flatmate who brought along a Chromecast, and we have since switched to using Plex as our main media server. The Plex web UI and mobile apps both allow streaming videos/music directly from Plex to the Chromecast. One area where this is currently lacking compared to the old setup is being able to see which TV shows/movies are available and start them playing in XBMC directly from the trakt web site2.

After a quick look at the Plex API I decided that this shouldn’t be too hard to implement myself3, and while I’m at it I may as well learn about writing code in TypeScript. Learning the infrastructural areas around TypeScript will be especially important, as getting them right the first time will save a lot of hassle once my team actually starts using TypeScript at work. Looking around at tools I’d heard mentioned by co-workers, I thought it should be easy enough to write a Plex API client library in TypeScript: using npm modules like rest as a base, pulling in .d.ts4 files from the DefinitelyTyped repository using tsd, and publishing the client as an npm module including compiled .js files along with the .d.ts files so it can be consumed easily by other TypeScript libraries. With this client in place, creating the small amount of UI for the trakt integration would be a cinch.

Once I actually started to build this base API client I quickly ran into issues that made it not very nice to work with. The very first one was the scarcity, and the quality, of .d.ts files available on DefinitelyTyped. This is sort of expected, since TypeScript is a relatively new language, and is easy enough to fix by writing these .d.ts files and contributing them back. The second issue was how to actually bring these .d.ts files into the client. The recommended method is to add a reference line to your code to pull in these definitions. E.g. if you have some source file lib/client.ts and have installed your .d.ts files into the default typings folder with a top-level tsd.d.ts to reference them, you would need to add

/// <reference path="../typings/tsd.d.ts" />

to lib/client.ts. Straight away this seems wrong to me: why are you having to manually include type definitions when doing something as simple as loading a module you use? Why is this not taken care of automatically by the compiler?

It gets even worse once you look at using a node module written in TypeScript from another library written in TypeScript. For this example let’s say I’d finished the Plex API client with the following source file structure5:

├── index.ts
├── lib
│   └── client.ts
├── package.json
├── tsd.json
└── typings
    ├── rest
    │   └── rest.d.ts
    └── tsd.d.ts

Prior to publishing, this would be compiled to .js files by tsc, allowing consumption by normal JavaScript libraries. At the same time the .d.ts files can be generated and added to the module to provide the necessary type annotations for any TypeScript libraries consuming it. This results in the following module file structure:

├── index.d.ts
├── index.js
├── lib
│   ├── client.d.ts
│   └── client.js
├── package.json
└── typings
    ├── rest
    │   └── rest.d.ts
    └── tsd.d.ts

Now, any normal JavaScript consumers of this library can simply

var plexApi = require('plex-api');

as usual. For any TypeScript consumers however it seems like they’d need to

/// <reference path="../node_modules/plex-api/index.d.ts" />
import plexApi = require('plex-api');

Except that won’t actually work. The .d.ts files generated by tsc are source files that define an external module (§11.1). These sorts of module definitions only work when the compiler’s pull type resolution resolves an external module reference directly to the file6. This doesn’t occur when they’re pulled in by a reference directive, although I can find very little information on what exactly should happen when a reference directive is encountered; §11.1.1 contains the only mention of it I can find and simply says

A comment of the form /// <reference path="…"/> adds a dependency on the source file specified in the path argument. The path is resolved relative to the directory of the containing source file.

Anyway, the only sorts of .d.ts files that seem to work well in reference directives are ones that contain Ambient External Module Declarations (§12.1.6). These declarations do not cause the source file to be considered an external module and instead allow it to be a part of the global module (§11.1). When the compiler later attempts to resolve a top-level external module name (like when resolving import plexApi = require('plex-api');) then if there is an ambient external module declaration that matches it will be preferentially returned before attempting to find a file defining the module (§11.2.1).

This works well for definitions pulled in by DefinitelyTyped, since they’re being handwritten for the modules it’s easy enough to write them as ambient external module declarations7. For modules generated from .ts files though this is a problem. It’s a non-trivial conversion to take all the output .d.ts files and generate them correctly as ambient external module declarations.
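For illustration, a handwritten ambient external module declaration for the plex-api module might look something like this (the API surface shown is invented):

```typescript
// plex-api.d.ts — an ambient external module declaration (§12.1.6).
// Because the file's top level contains only the declare, it stays part of
// the global module, and the quoted name is what the compiler matches when
// resolving import plexApi = require('plex-api').
declare module "plex-api" {
  export interface ClientOptions { host: string; port?: number; }
  export interface Client {
    query(path: string, callback: (err: Error, result: any) => void): void;
  }
  export function connect(options: ClientOptions): Client;
}
```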

There is still another issue, however. Even if you convert the definition files into ambient external module declarations you start running into dependency issues. Let’s say you are writing another module that depends on the fixed-up plex-api module (which also happens to minify all its JavaScript into index.js). This module also has to access another REST service, but it’s such a small access that you decided not to create a separate client library for it and instead write it directly into the module. So as well as depending on the plex-api module you depend directly on the rest module. The file layout for this situation would be something like:

├── node_modules
│   ├── plex-api
│   │   ├── index.d.ts
│   │   ├── index.js
│   │   ├── node_modules
│   │   │   └── rest
│   │   │       ├── index.js
│   │   │       └── package.json
│   │   ├── package.json
│   │   └── typings
│   │       ├── rest
│   │       │   └── rest.d.ts
│   │       └── tsd.d.ts
│   └── rest
│       ├── index.js
│       └── package.json
├── index.ts
└── typings
    ├── rest
    │   └── rest.d.ts
    └── tsd.d.ts

With file contents:

typings/rest/rest.d.ts (the same file also shipped as node_modules/plex-api/typings/rest/rest.d.ts):

declare module "rest" {
  // Module declarations
}

node_modules/plex-api/index.d.ts:

/// <reference path="./typings/tsd.d.ts" />
declare module "plex-api" {
  // Module declarations
}

index.ts:

/// <reference path="./typings/tsd.d.ts" />
/// <reference path="./node_modules/plex-api/index.d.ts" />
// All the cool codes.

The issue with this is that the two modules will be referencing different files that both declare the ambient external module “rest”. As soon as the second one is loaded the compiler will emit a duplicate declaration warning. If instead the plex-api module did not distribute its dependency typings you’d get the opposite issue: any library that uses plex-api would have to include all of its dependencies’ typings in its own typings folder.

My next post should have a proposed solution to this issue. For a preview, take a look at this test GitHub repo and this TypeScript pull request.

  1. Well, at least one of the big new things, and it’s not really that big or that new, I guess it’s really just a thing.

  2. Provided by the XBMC Trakt.TV Remote Chrome extension.

  3. Other than a complete lack of documentation for the Plex API.

  4. TypeScript definition files; these allow using node modules written in pure JavaScript while still retaining type information.

  5. Pretend there’s a whole tree of files under lib; just putting a single file there makes this example easier. There are also likely to be many more typings than just the rest one.

  6. Don’t ask me to explain what this means; it’s just based on the source file and method names that needed changing to work around this limitation. I might have more details by the time I post about the solution.

  7. Disappointingly there’s a lot of definitions available that don’t follow this standard however…

Sunday, 4 August 2013

Cocktail Maker: Software Architecture

As mentioned earlier, the hardware side of the cocktail maker will be controlled by a Raspberry Pi with a user interface running on an Android tablet. The Raspberry Pi will make interacting with the valves simple via its built-in GPIO ports, while the Android interface will mean we can have a nice touchscreen-based client.

For ease of development the two devices will likely communicate via a REST-like API. Luckily enough, I’ve recently been working on a major plan at work involving a substantial bit of work to do with RESTifying our current API. In both cases this will probably be the popular conception of REST, ignoring the Hypertext As The Engine Of Application State (HATEOAS) constraint.

There are actually two mostly distinct sets of resources that the server will have to provide for the client. The majority of the resources will be a generic collection of cocktails with pictures, descriptions, recipes etc. Then there will also be the resources specific to the cocktail maker: the list of available ingredients (to allow filtering the cocktails), calibration information1, the current processing state, and starting and cancelling the current task. The first of these sets is also very widely applicable; anyone and everyone2 could be interested, including other people building automated cocktail makers.
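The client-side use of those two resource sets could be sketched like this (all field names are assumptions, not the real API):

```typescript
// The generic cocktail collection.
interface Cocktail {
  name: string;
  ingredients: string[]; // ingredient names, lower-case
  recipe: string;
}

// The maker-specific state resource.
interface MakerState {
  available: string[]; // ingredients currently hooked up to valves
  busy: boolean;       // is a pour currently in progress?
}

// The filtering mentioned above: which cocktails can the machine make now?
function makeable(cocktails: Cocktail[], state: MakerState): Cocktail[] {
  const have = new Set(state.available);
  return cocktails.filter(c => c.ingredients.every(i => have.has(i)));
}
```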

For that reason I’ve decided to separate the cocktail database from the main cocktail maker server. I haven’t been able to find any appropriate existing databases, so I’ve taken it upon myself to create the first online crowd-sourced cocktail database. I’m hoping to have a very basic version up within a few months; the work on it will obviously be sporadic, seeing as it is a part-time after-work project. The actual time frame will probably depend a lot on how much stuff I get up to in America; if I find there’s not really that much to do in Austin I may end up spending a large part of most evenings working on it.3

As part of developing this database I want to experiment with HATEOAS, especially HATEOAS as implemented via JSON. There are some really good examples of how to implement it in XML, using the rel attribute for metadata about the resource. There are also a few libraries that attempt to implement it with JSON, but from what little I’ve seen of them something seems off. Once I get some time to research the current libraries I’ll try and write a post about what it is that bugs me about them and how I’m going to try and avoid those issues in my implementation.
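For a taste of what I mean by carrying the rel idea over to JSON, here’s a rough sketch where a client navigates by relation name rather than hardcoded URL (the link relations and URLs are invented for illustration):

```typescript
// A resource carrying rel-style links in JSON.
const cocktail = {
  name: "Tom Collins",
  links: [
    { rel: "self", href: "/cocktails/tom-collins" },
    { rel: "ingredients", href: "/cocktails/tom-collins/ingredients" },
    { rel: "make", href: "/maker/tasks" }, // POST here to start pouring
  ],
};

// The client looks links up by relation, so the server is free to move URLs.
function linkFor(resource: { links: { rel: string; href: string }[] },
                 rel: string): string | undefined {
  const link = resource.links.find(l => l.rel === rel);
  return link ? link.href : undefined;
}
```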

In the meantime we’ll probably have a simple hardcoded list of cocktails to use. Worst case we’ll just be stuck drinking Tom Collins’ continuously.4

  1. Especially if the pipes’ flow rate is affected by viscosity; sugar syrup can be a few hundred to over a thousand times as viscous as water.

  2. Above the legal age for alcohol consumption in their country.

  3. What’s the chance of that in The Live Music Capital of the World.

  4. I could think of much worse things in life than a nigh unlimited supply of automatically prepared Tom Collins’ *Cough*kscrew .

Thursday, 1 August 2013

Cocktail Maker: Overview

So far these are the components we have:

[Photos: the tubing and the valves]

Ten metres of 10 mm silicone piping (food grade1) and fifteen solenoid valves.

The current plan is to have all the spirits and mixers in upside-down bottles above a bench; tubes will come down from these bottles, pass through the valves, then all come together in some form of tap above where you place your glass. The valves will be controlled by a Raspberry Pi with an Android tablet hooked up to allow selection of which drink you want.

[Sketch of the planned layout]

It will of course look much fancier than this sketch, personally I’m thinking some nice dark stained wood with brass fixtures, some form of laser cut drip tray grate with an icon representing the limitless potential of this device, and of course only top shelf liquors allowed.

Designing it this way means there is very little mechanical design that needs doing; really the most complicated part will be ensuring the bottles are held securely while being easy to replace as they empty.2 Which is all for the best: while we have 3 mechatronics engineers on the project, only one has done any hardware work since leaving uni.

The biggest disadvantage of this is the limitation of only working with liquids. It would be amazing if we could have ice and garnishes added automatically, but (at least for the first iteration) we are going to require the user to manually add these.

Since we won’t have much mechanical design to do we can instead spend longer on the software side. If we get a really nice client designed for the Android tablet, complete with all the background info about cocktails and the ability for the user to adjust the strength on the fly, then this will be re-usable if (once) we build a version 2 with more capabilities.
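As a first bit of that software, here’s a sketch of the core pour logic: hold each solenoid open just long enough to dispense the requested volume at that ingredient’s calibrated flow rate. The flow-rate numbers are pure guesses, and the actual GPIO toggling is left out:

```typescript
// Hypothetical per-ingredient calibration: ml per second through the valve.
// Viscous syrups flow far slower than spirits, hence per-ingredient values.
const flowRateMlPerSec: { [ingredient: string]: number } = {
  gin: 20,
  "sugar syrup": 4,
};

// How long (in ms) to hold the valve open to pour the requested volume.
function valveOpenMs(ingredient: string, volumeMl: number): number {
  const rate = flowRateMlPerSec[ingredient];
  if (!rate) throw new Error("no calibration for " + ingredient);
  return (volumeMl / rate) * 1000;
}
```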

  1. Supposedly. Not really sure how much you can trust that sort of claim from cheap online stores.

  2. Although it would be nice if it detected the bottles getting empty and warned the user, or even hooked into an online store and ordered new bottles automatically.

Tuesday, 30 July 2013

Opifex Vocatuum

So I’ve finally started another project (along with Daniel Bentall, Simon Richards, and Joshua Jordan) that I’ll attempt to use to kickstart me actually blogging.1 Bet you can’t guess what it is from the title.2

A cocktail maker! I’ve been wanting to make one of these for years, ever since I saw one posted somewhere on the interwebs; I want to say it was on reddit, but I think this was before I’d even discovered reddit. The only example of a similar device that I can find at the moment is The Inebriator, but that’s a lot more commercialised than what we’re planning on.

I’m currently planning on posting every 2 or 3 days about this; hopefully it will prompt me to actually keep working on this steadily. The posts will generally be quite short,3 both so I don’t get too distracted when trying to finish one and so I have enough material to keep posting regularly.4 Coming up next will be a few posts detailing a broad overview of the system, followed by the initial section that I’m planning to work on.5

  1. Third time’s the charm, right?5

  2. Declining those Latin nouns took far longer than it should, I really need to learn Latin properly someday.

  3. Although probably longer than this one.

  4. How long will my posting spree last this time? Can I beat my current record of one post before dropping a topic?

  5. Shout out to Lauren Wayne for an easy way to do footnoting in Blogger; I’m going to be using this all the time from now on. I adapted it a bit to use an ordered list for nicer formatting of larger footnotes, but all the essentials are stolen from there. I don’t think I’m going to last very long doing this by hand though; I should try and find some way to get Markdown to do it for me.

Thursday, 12 July 2012

A history lesson

YASL (Yet Another Systems Language) is a language I started quite a while ago. It’s had some issues actually getting started (my eternal procrastination being a major one) and has been through quite a few transitions from its original design. To try and increase my motivation, and actually get some workable designs, I’ve decided to start blogging about the design process.

Because of the way I work this is likely to be a very rambling series of posts. Some of the topics I have initial thoughts on are the polymorphic aspects (including compile- and run-time polymorphism), the AST-macro-based design, and the work I’ve done on getting a very bare-bones REPL going using LLVM and a custom lexer and parser.

First though, I should describe the top-level design thoughts behind the language; these are very much open to change, as you’ll see from where they were in the original design through to now.

Ancient history

I first started thinking about YASL around the beginning of the year; according to GitHub, Jan 16 was when I committed the first version of my design doc, then on Jan 17 I committed some example code to work towards. At this point the main goal of the language was bare-metal embedded development: a nicer language to use for my home automation projects. This meant that initially one of the aims was to make the language entirely stack-based; when you’re working with a massive 128 bytes of SRAM you really can’t afford to spend any on heap management. One of the main decisions from this time to survive till now is using LLVM as the back-end code generator.


Since then the goal of the language has definitely changed. There were quite a few reasons behind this: I realised I’m almost certainly not skilled enough in language development to come up with a language that’s nice to write in while still being highly optimised for a stack-based architecture; I have nowhere near enough experience with LLVM to try to bypass its built-in stack management and roll my own; and LLVM doesn’t yet have good support for any embedded device’s instruction set.

Instead, modern YASL is more focused on being an exploration into a better general purpose language. Hopefully, through what I learn doing this, I can make another attempt at a beautiful, svelte language specifically optimised for the special challenges of ultra-low-power embedded development.

Probably the overriding goal for YASL is to create a language that supports as many nice features as languages such as Ruby, while being as performant in the general case as languages such as C or C#. This will involve ensuring that the 90%+ of the language that’s commonly used can be fully resolved and optimised at compile time, whilst making it possible to easily optimise the remaining 10% at runtime. Unfortunately this will probably mean a big runtime size cost, similar to the pain felt when opening your first .NET application since a restart; hopefully it will be possible to come up with some solutions for this as well.