Shorter Work Weeks – A forgotten lever for the Automation Age

It was around the time of the industrial revolution that a trade union in England lobbied for 888: 8 hours for work, 8 hours for recreation and 8 hours for sleep. It’s important to note that the industrial revolution put quite a few artisans out of work, but it was the wealth from that automation which made this policy change possible.

Hey Elon Musk, Artificial Intelligence will not be as bad as you say.

It’s everywhere in the media at the moment [2017-02-19]: Elon Musk crystal-balling doom and gloom about automated intelligence, from within a country made rich by automation.

The thing is, the less we do robotic, mundane things, the richer economies get, and the more human we become. Please watch this; it’s very well articulated.

Now, I have covered all this before in my previous blog article [Robots The Working Class]. There will be pockets of mass unemployment where Government policy has propped up flailing businesses, but overall this transition will be quite smooth and, again, hugely beneficial.

But I have continued thinking about this, and come to an important realisation: we need to continue reducing our work week hours, to keep most people in some sort of traditional employment.

Back in the industrial revolution, this realisation took a while and required a workers’ revolt (more about the hours than about sharing the jobs). The sooner Government masters this dusty old lever of working hours, the better.

Rather than campaigning for unemployment benefits, with the damaging problem of some bludging off others, I believe Government should continue to reduce the maximum hours in the work week, and keep more people employed.

This would start in the primary and secondary industries, which are being disrupted the most by automation. It would begin as a reduction of another half an hour every 5 years, increasing the pace as needed.
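As a toy illustration of that schedule, the cap over time can be sketched as follows (the 38-hour starting cap is my assumption for illustration, not from the text):

```javascript
// Toy projection of the proposed lever: cut the weekly cap by
// half an hour every 5 years (38-hour starting cap is assumed).
function weeklyCap(startHours, years, cutPerPeriod = 0.5, periodYears = 5) {
  return startHours - cutPerPeriod * Math.floor(years / periodYears);
}

console.log(weeklyCap(38, 20)); // 36 hours after 20 years
console.log(weeklyCap(38, 50)); // 33 hours after 50 years
```

Increasing the pace as needed would simply mean raising `cutPerPeriod` or shortening `periodYears`.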

A lot more research needs to be done here, but this will be required as we leave the information age and enter the automated age.

(This was written without much review; it will need more consideration and editing, and I’m hoping to research this whole domain of the work week more and more. BTW, workers might be paid a higher hourly rate as their work week reduces, so their take-home pay stays the same.)

Geelong needs a Makerspace

No one knows what you’ll discover or create. You’re unique. You have seen things that others haven’t, and face a set of problems unique to you and your friends.

So when you play with different tools and get exposed to bigger possibilities, there’s no telling what will happen.

Here’s one example of the unexpected things you see in a makerspace. I bought a thermal imaging camera, a FLIR One, for an ecology project in Cape Otway, to track predators.

Have you ever looked at a cup of tea like this before?

First you turn on the kettle. You can see the water level where it’s the hottest.

Bring it to the boil. You can’t see that line anymore; the steam has made the kettle uniformly hot.

Pour (don’t forget to prepare the tea bag beforehand). Looks like lava.

Get some nice cold milk. You can see the reflection on the kettle (technically it’s not reflecting coldness – I know).

All marble-y – and that’s after mixing the milk. Probably just standard heat convection there.

Here are some videos:

I know of at least one makerspace in the making, for Rock O’Cashel Lane in the CBD. Make sure you get behind Kathy Reid and Jennifer Cromarty, and make this place legendary.

Technomotive – a new word for a digital age

Tech – no – mo – tive (adjective)

  1. A response in a person to hype over quantities, subjective quality, parameters and perceived possibilities.
  2. Discarding or overriding other important factors in a debate or decision, due to [1].
  3. Examples:
    • Being an audiophile, she bought the $5000 cable, blinded by her technomotive weakness.
    • Like any car salesman, they used technomotive language, reading out the 0–100 km/h acceleration time and the power output of the engine.
    • The politician knew the 100 Mbps figure would be technomotive to journalists and tax-payers alike.
    • Technomotive descriptions held the audience’s attention.
    • The entire domain of technomotive persuasion is largely unexplored.
  4. Related forms:
    Technomotively (adverb)
    Technomotiveness, Technomotivity (noun)

The need for a new word

Pathos, Ethos and Logos were recognised in ancient Greek times. Back then there were no computers or technology as we perceive it, but there were quantities, so the word would have been useful, though not as important as in the information age. The traditional motivators of Pathos, Ethos and Logos still contribute. The desire to brag to friends and family is an obvious, but not a necessary, underlying motivator.

The key difference in these times is that the pace of change introduces a tangible factor of obsolescence, along with the environment of expectations and culture that arises around it. A person who considers technomotive factors is not necessarily technomotively persuaded, if they balance other considerations well. Although obsolescence is objectively real, it rarely justifies getting the best and paying a premium (although there are logical exceptions, such as military and space science).

Technomotive persuasion has been a common technique for over 100 years, but never had a name. It is a word which helps analyse persuasive writing and arguments wherever quantities or qualities are expressed or experienced, typically in a technology context. This new word provides a handle for analysis, and is the beginning of deeper research into the domain.

A work in progress

I identified the need for this word about 6 years ago, and have made several attempts to articulate and refine it since. I hope others will find it useful and contribute more ideas and research in this domain. I’ll continue to write more below, but hopefully the writing above is a sufficient and stable definition to move forward with.

Further examples

Lots of examples can be found in marketing material and article headlines. Here are some examples of technomotive language (in quotes):

  •  “Experience mind-blowing, heart pumping, knee shaking PhysX and …” – Technomotive language, appeals to the desire to have the best possible gaming experience.
  • “Today is so yesterday” – Technomotive language, appealing to desire to have the latest technology
  • “Planning to overclock your shiny new Core i5 system? Kingston reckons it has the RAM you need.” – Technomotive language, appeals to the desire to have the latest and most powerful technology.
  • The skyscraper was made up of 4,000T of steel and concrete
  • The new dam holds 4000GL of water, enough to…

More on Ignoring Economics

A good example is building one’s own PC. People with the money will often splurge on the best of everything, followed by benchmarking, to feed their technomotive desire for performance. When economics is considered, this isn’t the best choice: last year’s technology performs about 20% worse, but costs 50–80% less. Economics is less of a consideration when someone is driven by technomotive desire.
In decision-making, in the case of building a PC, it might be for gaming. One might justify the additional cost for the better quality of gameplay (another technomotivation). Rather than judging that the economics are unfavourable, one should instead reflect that technomotive desires have the biggest influence.
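The price/performance trade-off above can be put in rough numbers (these figures are illustrative, taken from the percentages in the text, not real benchmarks):

```javascript
// Value-for-money comparison using the rough figures above:
// last year's part is ~20% slower but 50-80% cheaper.
function valuePerDollar(performance, price) {
  return performance / price;
}

const latest = valuePerDollar(100, 1000); // this year's flagship (baseline)
const lastYear = valuePerDollar(80, 400); // 20% slower, 60% cheaper

console.log(lastYear / latest); // 2 - twice the performance per dollar
```

Which is exactly the gap a technomotive buyer ignores.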


Notes which may be used to update the core definition or the section [need for a new word]:

  • “Accentuate quantities”
  • Informal word: “drooling”
  • Decouple from obsolescence – while not in the definition, it is in the detail. Obsolescence is related, but I suspect should be kept separate to clarify the definition of technomotive. The more existing terms are explored and used, the better Technomotive can be refined.
    • Technomotive – quantities.
    • Obsolescence –
    • Nostalgia – One doesn’t think of an old computer in technomotive terms; we consider it obsolete, but it can have appeal by way of nostalgia.


  • Persuasion
  • Horsepower
  • Kilowatt
  • Speed
  • Power

Personal Drones – Flying “Cars” for All

Great ideas and progress rarely come from well-worn paths. How long have we waited for Flying Cars? Many have tried turning cars into sort of planes, or jet hovering machines.

Now it’s possible. Not by making cars fly, but making drones bigger to carry people.

Drones are mainstream and mature. The industry has grappled with air-space and privacy rules, and created auto-pilot, stability systems, and backup redundancy. Engineers have been reinvigorated to develop new algorithms and mechanical structures.

All of this is great for personal transport through the skies. With Flying Cars, we were expected to hold a recreational pilot licence, and although those engineers would have dreamed of auto-pilot, it was unobtainable. Drones have been a key stepping stone, and the newfound success of electric vehicles also paves a new path.

I suspect there are 10-20 years to go. The most critical element remaining is battery capacity. There are workarounds and hybrids, but when batteries get a science boost you’ll see a race to market from many key companies.

So stop hoping for Flying Cars, and start saving for your Personal Drone Transport. (And hopefully they find a good name for it)


Why did Open Source Bounties Fail?

I’m shocked. I thought Bounties would supercharge Open Source development. You were the chosen one! (cringe)

So today I wanted to post a bounty for Stasher. I did so on BountySource, but then I realised it was broken and abandoned. I looked further afield and it’s the same story: a digital landscape littered with failures.

Bounty Source

One of the better ones, limping along. They need a serious financial backer to grow their community faster.

  1. They seem to have a lot of server issues. Have a look at their recent twitter feed []
  2. When I posted my bounty, I did expect a tweet to go out from their account (as per my $20 add-on). Nothing. Either that subsystem is broken, or it has never been automated.
  3. Bounty Search is broken – “Internal server error.” in the console log.
  4. We know what I think about good security architecture. If people can’t talk about security correctly, it doesn’t matter that they know about bcrypt – can they properly wield its power?
  5. No updates on Press since 2014

Freedom Sponsors

They don’t have enough of a profile to excite me about their future. This has apparently been executed on a shoe-string budget. (I’ll try posting a bounty here if the BountySource one lapses.)

  1. Only 12 bounties posted this year (Jan-Nov) – only 4 of those have workers, 2 of those look inactive. But at least search works.
  2. Their last Tweet was in 2012.

Others are simply down.


This shouldn’t have happened. It failed because these startups ran out of cash and motivation.

There is massive potential here. So far we’ve seen MySpace; we need Facebook-level execution. And whoever does this needs a good financial backer with connections to help grow the community.

I hope to see an open source foundation, maybe Linux Foundation, buy Bounty Source.

Stasher – File Sharing with Customer Service

(This is quite a technical software article, written with software coders in mind)

It’s time for a new file sharing protocol. P2P in general is no longer relevant as a concept, and central filesharing sites show that consumers are happy with centralised systems with a web interface. I think I have a good idea for the next incremental step, but first some history.

It’s interesting that P2P has died down so much. There was Napster and other successes which followed, but BitTorrent seems to have ruled them all. File discovery was lost, and with Universal Plug and Play being a big security concern, even re-uploading is off by default.

P2P is no longer needed. It was so valuable before because it distributed the upload bandwidth, and also somewhat anonymised. But bandwidth continues to fall in price. MegaUpload and others like it were actually the next generation: they added some customer service around the management of the files, and charged for premium service. Dropbox and others have carved out even more again.

Stash (which is hopefully not trademarked) is my concept to bring back discovery. It’s a different world now, where many use VPNs and even Tor, so we don’t need to worry as much about security and anonymity.

It’s so simple, it’s easy to trust. With only a few hundred lines of code in a single file, anyone can compile their own, on Windows, in seconds – so there can be no hidden backdoors. Users who can’t be bothered with that can download the application from a trusted source.

It works by being ridiculously simple. A dumb application runs on your computer, set up to point at one or more servers. It operates only on one folder: the one it resides in. From there, the servers control Stasher. A client can perform a small set of server-directed actions, and can ban a server from performing any particular action.

And that’s it. It’s so basic you should never have to update the client. New features should be resisted. Thumbnails should be generated on the server, because there is time and bandwidth to simply get the whole file.

All of this works with varying software on the server, but the same Stash client. There is no direct P2P; however, several servers can coordinate, such that a controller server can ask a client to upload to another specific server. Such a service can pre-package the Stash client with specific servers, and throughout its lifetime the client’s server list can be updated with new servers.
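A minimal sketch of the client-side policy check described above: the server sends commands, and the client drops any action the user has banned for that server. The command shape and action names here are hypothetical illustrations, not a protocol spec.

```javascript
// Hypothetical Stash command set; a real client would define these
// against whatever the server protocol settles on.
const KNOWN_ACTIONS = ["download", "upload", "delete", "list"];

// Keep only commands the client understands and hasn't banned.
function applyPolicy(commands, bannedActions) {
  return commands.filter(
    (c) => KNOWN_ACTIONS.includes(c.action) && !bannedActions.has(c.action)
  );
}

const commands = [
  { action: "download", file: "wallpaper.jpg" },
  { action: "delete", file: "old.jpg" },
];

// This client has banned servers from deleting anything.
const allowed = applyPolicy(commands, new Set(["delete"]));
console.log(allowed.map((c) => c.action)); // [ 'download' ]
```

The ban list is the client’s one piece of local policy; everything else stays server-driven, which is what keeps the client dumb and stable.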

I’m thinking of building this, but I’m in no rush. I’ll make it open source. Can you think of any other applications for such a general-purpose file sharing framework?

For more information, see


Security measures ideas:

  • [Future] Code Virtual Machine
    • Only System and VM namespaces used
    • VM namespace is a separate small DLL which interacts with the system { Files, Network, System Info }
    • It’s easier to verify that the VM component is safe in manual review.
    • It’s easy to automatically ensure the application is safe
    • Only relevant for feature-extended client, which will span multiple files and more
  • [Future] Security analyser works by decompiling the software – ideally a separate project

Remaining problems/opportunities:

  • Credit – who created that original photo showing on my desktop? They should get some sort of community credit, growing with the votes they receive. This needs some sort of separate/isolated server which takes a hash and signs/stores it with a datetime, and potentially also extra metadata such as author name/alias.
    • Reviewers, while not as important, should also be able to have their work registered somewhere. If they review 1000 desktop backgrounds, that’s time. Flickr, for example, could make a backup of such credit. Their version of the ledger could be signed and dated by a similar process.
  • Executable files and malware – 
    • AntiVirus software on the client
    • Trusting that the server makes such checks – e.g. looking inside non-executables for payloads (i.e. image file tails).
  • Hacked controller
    • File filters on the client to only allow certain file types (to exclude executable files) – { File extensions, Header Bytes }
    • HoneyPot Clients – which monitor activity, to detect changes in behavior of particular controllers
    • The human operator of a controller types in a password periodically to assure that it’s still under their control. Message = UTCTimestamp + PrivateKeyEncrypt(UTCTimestamp), which is stored in memory.

Food Forever?

What if we could save our spoiling food before it was too far gone? I often have half a litre of milk which spoils at the office and I have to tip it down the sink.

I’m no biochemist, so I’m hoping this idea finds a nice home with a real scientist who either debunks it or points the way forward.

Could we have a home appliance which could UHT leftover milk that we can use later or donate?

Are there other foods which could be preserved in such a way? I’m guessing most would be an ultra heat process. Like an autoclave, you need to kill all the bacteria with no regard for taste. If it’s meat, it might be tough, but it would at least be a better pet food than what’s in a can.


5 Secret Strategies for GovHack

Monday night I attended the VIC GovHack Connections Event. No, there wasn’t any pizza... but there was a selection of cheeses, artichokes and more.

Here are my Top 5 tips

1) Do something very different

This competition has been running for a number of years, and the judges are seeing similar patterns emerging. Browse through previous years’ hackerspace pages and look at the types of projects they’ve had before. Look at the winners.

2) Evaluate the data

This might be the main aim of your project, but we want quality data for future years, and enough evidence to remove the unnecessary, find the missing, and refresh the old.

3) Prove real-time and live data

Melbourne City have their own feeds of real-time data this year. If you want to see more of that, consider using this data.

4) Simulate data

This strengthens your assessment of missing data [2], could involve simulated live data feeds [3], and would be very different [1].
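A simulated feed can be as simple as fabricating records in the shape a real endpoint might serve. Here’s a toy sketch of a live bus-arrival feed; the routes and ETA formula are made up, and deterministic so results are repeatable:

```javascript
// Toy simulated live feed: fabricated bus arrival estimates in the
// shape a real-time transit endpoint might serve.
function simulateArrivals(routes, now = Date.parse("2016-07-29T09:00:00Z")) {
  return routes.map((route, i) => ({
    route,
    etaMinutes: 3 + (i * 7) % 15,             // stand-in for real telemetry
    generatedAt: new Date(now).toISOString(), // feed freshness marker
  }));
}

console.log(simulateArrivals(["Route 1", "Route 25"]));
```

Swapping the deterministic formula for noise, or replaying historical records with jitter, gets you a convincing demo feed without waiting on a real data release.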

5) Gather data

This is actually a bit harder than simulating data [4], but very useful. You could use computer vision, web scraping, or make an open app (like OpenSignal) that many people install to collect data.

Off the record

I’ve got a few ideas for GovHack projects in mind on the day. I’m not competing, so come and talk to me on Friday night or Saturday for ideas along these lines.

Try Scope Catch Callback [TSCC] for ES6

So it has started; it wasn’t a hollow thought bubble – I have begun the adventure beyond the C# nest []. It will take a while, because I still have a lot of software that runs on C#, and I do still like the language, but all new development will be on ES6 and NodeJS.

So I’m going to record my outlook over a few blog posts. I re-discovered Cloud9 IDE, and I’ve got a few thoughts on architecture and a new feature for ES6.

Today, I’ll tell the world about my proposed ES6 enhancement.

Despite the ECMAScript committee stating they are about “Standards at Internet Speed”, there isn’t much Internet tooling to make that happen. They have certainly been successful in making rapid progress, but where does one submit an idea to the committee? There’s not even an email link. I’m certainly not going to cough up around $100k AUD to become a full member. [Update: They use GitHub – a link to it from their main website would be great. Also check out:]

So I’ll be satisfied to just put my first ES6 idea here.

Try blocks don’t work in a callback world. I’m sure there are libraries which could make this nicer; in C#, try blocks do work with the async language features, for instance.

So here is some code which won’t catch an error (reconstructed; `$http` is an Angular-style HTTP client):

    try {
        $http.get(url).then((r) => process(r));
    } catch (e) { /* never reached for async errors */ }

In this example, if there is an error during the HTTP request, it will go uncaught.

That was simple, though. How about a more complex situation?

    function commonError(e) { /* shared handler */ }

    try {
        runSQL(qry1, (result1) => {
            runSQL(qry2, (result2) => {
                // use results
            }, commonError);
        }, commonError);
    } catch (e) { /* again, never reached for async errors */ }

Callback nesting isn’t very nice. This is why `await` is pushed forward as a good candidate. But what if the API you target doesn’t implement Promise? What if you only sometimes define a try block?

My proposal is to supply a method which gets the Try Scope Catch Callback [TSCC]. If you don’t return a promise, it would be like this:

    function get(url, then, error) {
        error = error || window.callback.getTryScopeCatchCallback(); // TSCC

        // when an error occurs:
        //     error(e);

        // This could be reaching another try/catch block, or be the
        // result of a callback from another error method
    }
Promises already have a catch function in ES6; they’re so close! A Promise should direct its error/catch callback to the TSCC by default. If the Promise spec were updated to include this, my first example above would have caught the error with no changes to the code.
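The idea can be emulated in userland today. In this sketch, `tryScope`, `getAsync` and the manual queue are hypothetical illustrations (not the proposed API): the async function captures the enclosing try scope’s catch callback at call time, so errors raised later still route back to it.

```javascript
// Userland sketch of TSCC: the async API defaults its error handler
// to the catch callback of the try scope that was active at call time.
let currentCatch = null; // the active try scope's catch callback
const queue = [];        // stand-in for the event loop

function tryScope(body, onError) {
  currentCatch = onError;
  try { body(); } catch (e) { onError(e); }
  currentCatch = null;
}

function getAsync(value, then, error = currentCatch) { // default: TSCC
  queue.push(() => { try { then(value); } catch (e) { error(e); } });
}

let caught = null;
tryScope(() => {
  getAsync(41, (v) => { throw new Error("boom " + v); });
}, (e) => { caught = e.message; });

while (queue.length) queue.shift()(); // callbacks run after the try exits
console.log(caught); // "boom 41"
```

Note the callback throws well after the synchronous try block has exited, yet the error still reaches that scope’s handler, which is exactly what the language-level TSCC would guarantee.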

So what do you think ECMA members, can we get this into ECMAScript?

Feedback log – from maillist

  • kdex

Why not just transform callback-based APIs into `Promise`s and use (presumably ES2017)
`await`/`async` (which *does* support `try`/`catch`)?

e. g.:
try {
    await curl("");
    /* success */
} catch (e) {
    /* error */
}

  • My response

1. Whether you await or not, the try scope’s catch callback [TSCC] should still be captured.

2. If there is no use of Promise (for the coder’s own design reasons), the try scope’s catch callback [TSCC] should still be available.

GovHack – Do we need real-time feeds?

It’s the year 2016, and we still don’t know how many minutes away the next bus is in Geelong.

Public releases of data take time and effort, and unless they are routinely refreshed, they get stale. But there’s certain types of information that can’t be more than minutes old to be useful.

Traffic information is the most time sensitive. The current state of traffic lights, whether there are any signals currently out of order, and congestion information is already collected real-time in Australia. We could clearly benefit from such information being released as it happens.

But imagine this benchmark of up-to-the-minute data was applied to all datasets. First of all, you won’t have any aging data. But more importantly, it would force data publication to be automated, and therefore scalable, so that instead of preparing another release of data, public servants could be focusing on the next type of data to make available.

What do you think?

Participate in GovHack this year, play with the data we do have and continue the conversation with us.

(I will be publishing a series of blogs focusing on GovHack, exploring opportunities and challenges that arise and consider while I work on the committee for the Geelong GovHack which runs 29-31 July 2016)

Image courtesy Alberto Otero García licensed under Creative Commons

GovHack – What tools will you use this year?

The world is always changing, and in the world of technology it seems to change faster.

You certainly want to win some of the fantastic prizes on offer, but remember, we want world changing ideas to drive real change for real people, and we can do that best together.

So share with us and your fierce competitors, which new tools and techniques you plan to use this year.

Some popular new tools that I’m aware of include Kafka and MapMe.

Both of these feed into my own personal desire to capture more data and help Governments release data real-time. Check them out, and please comment below about any tools and techniques you plan to use this year.

(I will be publishing a series of blogs focusing on GovHack, exploring opportunities and challenges that arise and consider while I work on the committee for the Geelong GovHack which runs 29-31 July 2016)

Image courtesy RightBrainPhotography licensed under Creative Commons

What data do you want to see at GovHack?

Let’s forget about any privacy and national security barriers for the moment. If you could have any data from Government, what would you request?

GovHack is a great initiative which puts the spotlight on Government data. All of the departments and systems collect heaps of data every day, and lucky for us they’re starting to release some of it publicly.

You can already get topological maps, drainage points, bin locations, bbq locations, council budget data and much more. But that’s certainly not all the data they have.

Comment below on what data you think would be useful. It might already be released, but it would be interesting to go to Government with a nice long shopping list of data to be ready for us to delve into next year.

(I will be publishing a series of blogs focusing on GovHack, exploring opportunities and challenges that arise and consider while I work on the committee for the Geelong GovHack which runs 29-31 July 2016)

Image courtesy Catherine, licensed under Creative Commons

GovHack – How can we collect more data?

If we had all the cancer information from around the world, any keyboard warrior could wrangle the data and find helpful new discoveries. But we struggle to even complete a state-level database let alone a national or global one.

After being dazzled by the enormous amount of data already released by Government, you soon realise how much more you really need.

For starters, there are lots of paper records that aren’t even digital. This isn’t just a Government problem, of course; many private organisations also grapple with managing unstructured written information on paper. But as long as Governments are still printing and storing paper in hard-copy form, we further delay a fully open digital utopia. At the very least, storing the atomic data separately from a merged and printed version enables future access, and stops the mindless discarding into a digital black hole.

Then consider all the new types of data which could be collected: the routes that garbage trucks and buses take, and the economics of their operation. If we had such data streams, we could tell citizens if a bus is running ahead or behind. We could have GovHack participants calculate more efficient routes. Could buses collect rubbish? We need data to know. More data means more opportunities for solutions and improvement for all.

When you consider the colossal task ahead of Government, we must insist on changing culture so that data releases are considered a routine part of public service. And also make further data collection an objective, not a bonus extra. Until that happens, large banks of knowledge will remain locked up in fortresses of paper.

What do you think? Do you know of any forgotten archives of paper that would be useful for improving lives?

Participate in GovHack this year, play with the data we do have and continue the conversation with us.

(I will be publishing a series of blogs focusing on GovHack, exploring opportunities and challenges that arise and consider while I work on the committee for the Geelong GovHack which runs 29-31 July 2016)

Image courtesy Fryderyk Supinski licensed under Creative Commons

Why I want to leave C#

Startup performance is atrocious and, critically, that slows down development. It’s slow to get the first page of a web application, slow navigating to whole new sections, and worst of all: initial Entity Framework LINQ queries.

It’s 2016, .Net is very mature, but this problem persists. I love the C# language far more than Java, but when it comes to the crunch, run-time performance is critical. Yes, I was speaking of startup performance, but you also encounter it in new areas of the software warming up, and when the AppPool is recycled (scheduled every 29 hours by default). Customers see it most, but it’s developers who must test and retest.

It wastes customers’ and developers’ time. Time means money, but the hidden loss is focus. You finally get focused on a task, but then have to wait 30 seconds for an ASP.NET web page to load so you can test something. Even stopping your debugging session in VS can take tens of seconds!

There are known ways to minimise such warm-up problems, with native image generation and EF query caching. Neither is a complete solution. And why work around a problem that isn’t experienced in node.js or even PHP?

.Net and C# are primarily for business applications. So how important is it to optimise a loop over millions of records (for big data and science), compared to the user and developer experience of running and starting with no delay?

Although I have been critical of Javascript as a language, recent optimisations are admirable. It has been optimised with priority on first-use speed, with critical sections optimised as needed.

So unless Microsoft fixes this problem once and for all, without requiring developers to coerce workarounds, they’re going to find long-term dedicated coders such as myself shifting to Javascript, especially now that ECMAScript and TypeScript make Javascript infinitely more palatable.

I have already recently jettisoned EF in favour of a proprietary solution which I plan to open source. I also have plans for node.js and even my own IDE which I plan to lease. I’m even thinking of leaving the Managed world altogether – Heresy!

.Net has lots going for it, it’s mature and stable, but that’s not enough anymore. Can it be saved? I’m not sure.

My Patch for the Internet Security Hole

I just posted another article about the problem, but there are several steps which could be taken today to plug the hole (although that won’t protect any historical communications). This involves doubling up security with post-quantum cryptography (PQC), and also the use of a novel scheme that I propose here.

PQC can’t be used alone today; it’s not proven. The algorithms used to secure internet communication today were thoroughly researched, peer reviewed, and tested; they have stood the test of time. PQC is relatively new, and although accelerated efforts could have been taken years ago to mature it sooner, they were not. That doesn’t mean PQC doesn’t have a part to play today.

Encryption can be layered, yielding the benefits of both schemes. RSA, for example, is very mature but breakable by a Quantum Computer. Any PQC scheme is immature, but theoretically unbreakable by a Quantum Computer. By combining them, the benefits of both are gained, at the cost of additional CPU overhead. This should be implemented today.

Standards need to be fast-tracked, and software vendors should implement them with haste. Only encapsulation is required, like a tunnel within a tunnel. TLS may already have the ability for dual-algorithm protection built into the protocol; I’m yet to find out.

In addition to the doubling described above, I have a novel approach: Whisp. Web applications (ignoring OAuth) store a hash of a password for each user; this hash can help form a key to be used for symmetric encryption. Because symmetric encryption is also mature and unbreakable (even by a Quantum Computer), it’s an even better candidate for doubling. But it would require some changes to the web application login process, and has some unique disadvantages.

Traditionally, in a web application, a TLS session is started, which secures the transmission of a login username and password. Under Whisp, the fully secured TLS session would only be able to start after the user enters the password. The usual DH or RSA process is used to generate a symmetric key for the session, but that key is then processed further using the hash of the user’s password (likely with a hashing algorithm). Only if the user enters the correct password will the secure tunnel be active and communication continue. There are still drawbacks to this approach, however.

The initial password still needs to be communicated to the server upon registration. So this would work well for all established user accounts, but creation of new user accounts would require additional protections (perhaps PQC doubling) when communicating a new password.

I would favor the former suggestion of PQC doubling, but there may well be good reasons to also use Whisp. And it shouldn’t be long before PQC can be relied upon on its own.

Busted! Internet Community Caught Unprepared

Internet Security (TLS) is no longer safe. That green HTTPS word, the golden padlock: all lies. The beneficiaries are the trusted third parties who charge for certificates. Yes, it sounds like a scam, but not one actively peddled; this one comes from complacency among the people who oversee the standards of the internet. Is there bribery involved? Who knows.

A month ago there were no problems with TLS, because it was only on the 6th of October that a paper was published which paves the way to build machines that can break TLS. These machines are called Quantum Computers. Update: now a whole Q-computer architecture has been designed publicly (what has been done privately?), and one could be built for under $1B. So where’s the scam?

The nerds behind the Internet knew long ago about the threat of such a machine being developed. They also knew that new standards and processes could be built that are unbreakable even by a quantum computer. But what did they do? They sat on their hands.

I predicted in 2010 that it would take 5 years before a Quantum Computer would be feasible. I wasn’t specific about a mass production date. I was only 4 months out. Now it’s feasible for all your internet traffic to be spied on, including passwords, if the spy has enough money and expertise. But that’s not the worst part.

Your internet communication last year may be deciphered also. In fact, all of your internet traffic of the past, that you thought was safe, could be revealed, if an adversary was able to store it.

I wrote to Verisign in 2010 and asked them what they were doing about the looming Internet Emergency, and they brushed my concern aside. True, users have been secure to date, but they knew it was only a Security Rush. Like living in the moment and getting drunk, not concerned about tomorrow’s hangover, users have been given snake oil, a solution that evaporates only years later.

All of these years, money could have been poured into accelerated research. There are solutions today, but they’re not yet well tested. The least that could be done is a doubling of security: run the tried and tested RSA in tandem with a new, theoretically unbreakable encryption scheme.
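Doubling can be sketched very simply: derive the session key from two independently negotiated secrets, so an attacker must break both exchanges. The combiner below (a labelled SHA-256 hash) is an illustrative choice of mine, not a construction specified in the article.

```python
import hashlib
import os

def doubled_key(classical_secret: bytes, pqc_secret: bytes) -> bytes:
    """Derive one session key from two independent key exchanges.

    Breaking RSA/DH with a quantum computer still leaves the
    PQC-derived half of the input unknown, and vice versa.
    """
    return hashlib.sha256(b"doubling-v1" + classical_secret + pqc_secret).digest()

rsa_secret = os.urandom(32)  # stands in for an RSA/DH shared secret
pqc_secret = os.urandom(32)  # stands in for e.g. a lattice-based KEM secret
session_key = doubled_key(rsa_secret, pqc_secret)
assert len(session_key) == 32
```

Either secret alone tells an eavesdropper nothing about the session key, which is exactly the safety net wanted while the newer schemes remain under-tested.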

Why is there still no reaction to the current security crisis? There are solid solutions that could be enacted today.


The Fraying of Communication and a proposed solution: Bind

In medicine, the misinterpretation of a doctor’s notes can be deadly. I propose that ambiguity in even broader discourse has a serious and undiscovered impact. This problem needs to be researched and will be expounded further, but first I would like to explore a solution, which I hope will further open your understanding of the problem.

As with all effective communication, I’m going to name this problem: Fraying. For a mnemonic, consider each end of a frayed string as one of the many misinterpretations.

His lie was exposed, covered in mud, he had to get away from his unresponsive betraying friend: the quick brown fox jumped over the lazy dog.

That’s my quick attempt at an example where context can be lost. What did the writer mean? What can a reader or machine algorithm misinterpret it to mean? Even with the preceding context, the final sentence can still be interpreted many ways. It’s frayed in a moderate way, with minor impact.

In this example, it would be possible for the author to simply expound further on that final sentence, but that could ruin the rhythm for the reader (of that story). Another method is to add such text in parentheses. Either way, it’s a lot of additional effort by multiple parties. And particularly in business, we strive to distill our messages to be short, sharp and to the point.

My answer of course is a software solution, but one where plain text is still handled and human readable. It’s a simple extensible scheme, and again I name it: Bind (going with a string theme).

The quick [fast speed] brown fox [animal] jumped [causing lift] over [above] the lazy dog [animal]

With this form, any software can present the data. Software that understands the scheme can remove the square brackets when there is no facility for an optimized viewing experience. For example:

The quick brown fox jumped over the lazy dog

(Try putting your mouse over the lighter coloured words)

Since the invention of the computer and keyboard, such feats have been possible, but not simply, and certainly not mainstream.

So it would be important to proliferate a Binding text editor which is capable of capturing the intent of the writer.

The benefits of Binding go beyond solving Fray. Bindings add more context for disability accessibility (I would argue Bind is an accessibility feature – one that also serves typical users), and depending on how many words are Bound, they can even assist with language translation.

Imagine Google Translate with a Binding text editor: the translations would be much more accurate. Imagine Google search, where you type “Leave”, hover over the word and select [Paid or unpaid time off work], leaving you with far fewer irrelevant results.

Such input for search and translation need not wait for people to manually bind historical writing. Natural Language Processing can bear most of the burden and when reviewing results, a human can review the meaning the computer imputed, and edit as needed.

We just need to be able to properly capture our thoughts, and I’m sure we’ll get the hang of it.

Hey, by the way, please add your own narrative ideas for “the quick brown fox jumped over the lazy dog” – what other stories can that sentence tell?

Appendix – Further Draft Specification of Bind:

Trailer MetaData Option:

  • Benefit: the metadata is decoupled visually from the plain text. This makes viewing on systems without support for the Bind metadata still tolerable for users.
  • Format: [PlainText][8x Tabs][JSON Data]
  • JSON Schema: { BindVersion: 1, Bindings: […], (IdentifierType: “SNOMED”) }
  • Binding Schema: { WordNumber: X, Name: “Z”, Identifier: “Y”, Length: 1}
  • Word Number: Word index, when words are delimited by whitespace and punctuation is trimmed.
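Assuming the eight-tab trailer format above, a minimal Python parser might look like the following. The example sentence and binding values are illustrative only, and I have assumed JSON uses the exact key names from the draft schema.

```python
import json

SEPARATOR = "\t" * 8  # the draft spec's eight-tab delimiter

def parse_trailer(line: str):
    """Split a Bind trailer line into (plain_text, metadata_dict_or_None).

    Systems without Bind support still show readable text before the tabs.
    """
    if SEPARATOR not in line:
        return line, None  # plain, un-Bound text
    text, raw = line.split(SEPARATOR, 1)
    return text, json.loads(raw)

bound = ("The quick brown fox jumped over the lazy dog" + SEPARATOR +
         json.dumps({"BindVersion": 1,
                     "Bindings": [{"WordNumber": 1, "Name": "fast speed",
                                   "Identifier": "415", "Length": 1}]}))
text, meta = parse_trailer(bound)
assert text == "The quick brown fox jumped over the lazy dog"
assert meta["Bindings"][0]["Name"] == "fast speed"
```

Note the draft leaves open whether word indices are 0-based or 1-based; a real implementation would need the spec to pin that down.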

Mixed MetaData Option:

  • When multiple preceding words are covered by the Binding, the number of dashes indicates how many additional words are covered. Bind Text: “John Smith [-Name]” indicates the two words “John Smith” are a Name.
  • The identifiers specified in ontological databases such as SNOMED may be represented with a final dash and then the identifier. Bind Text: “John Smith [-Name-415]” indicates a word-definition identifier of 415, which might have a description of “A person’s name”.
  • When a square bracket is intended by the author, output a double square bracket. Bind Text: “John Smith [-Name] [[#123456]]” renders plainly as “John Smith [#123456]”.
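The mixed-metadata rules can be sketched as a small plain-text renderer: strip annotations, but unescape doubled brackets. This is my own illustrative implementation of the draft rules above, not part of the spec.

```python
import re

def render_plain(bind_text: str) -> str:
    """Render Bind mixed-metadata text as plain text.

    Removes [-Name] / [meaning] annotations and unescapes [[...]] to [...].
    Assumes literal square brackets are always escaped as doubles, per the draft.
    """
    # Protect escaped brackets with sentinel bytes, drop annotations, restore.
    protected = bind_text.replace("[[", "\x00").replace("]]", "\x01")
    stripped = re.sub(r"\s*\[[^\[\]]*\]", "", protected)
    return stripped.replace("\x00", "[").replace("\x01", "]")

assert render_plain("John Smith [-Name] [[#123456]]") == "John Smith [#123456]"
assert render_plain("The quick [fast speed] brown fox [animal]") == "The quick brown fox"
```

A Binding-aware editor would instead keep the annotations and surface them on hover, as in the fox example earlier.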

Music needs Intelligence and hard work

In all things, I believe a person’s overall intelligence is the first factor that determines their performance. Some of the best sporting athletes find themselves running successful business ventures. The same goes for the best comedians. Of course hard work and training are necessary for any craft that an intelligent person applies themselves to, but good outcomes seldom happen by accident.

Today I stumbled across a YouTube clip – Haywyre – Smooth Criminal – and concluded that this was one smart guy, and at such a young age! This assumption was further supported by some brief research through other news articles about him. He has done most of the mastering of his albums himself, and I wouldn’t be surprised if he produced the YouTube video clip and website on his own too! When such intelligence collides with a focused, hard work ethic, this is what you get. Of the music articles so far written, I don’t think any of the writers have realized yet that they are writing about a genius just getting started.

His style definitely resonates with me, with his jazz and classical roots, but most important for me is the percussive expression that drives his compositions. Too many people will be captivated by the improvisation in the melody, but that’s only one layer of his complex compositions. If he’s still working solo, he will need to find good people to collaborate with in the future to reach his full potential. I hope Martin applies himself to other genres of music and other pursuits.

I happen to work in the software development industry, and have found that it doesn’t matter how much schooling or experience someone has had: anyone can have their potential capped by their overall intelligence. One’s brain capacity is somewhat determined by genes, diet and early development. Once you have fully matured, there’s little or no ability to increase your brain power. That would be confronting for a lot of people who find themselves eclipsed by giants of thought.

So it’s no wonder that intelligence is seldom a measure of a person these days. Musicians are often praised as being talented for their good music, but that excludes all others: the implication is that they must have some magical talent to succeed. As with creativity, the truth is less interesting, but very important. We should be pushing young children to develop intelligence, and to value intelligence, not Hollywood “talent”. I suspect that valuing intelligence publicly risks implying a lack of intelligence in ineffective musicians (the same applies to other crafts).

Don’t let political politeness take over.

Digital Things

The “Internet of Things” is now well and truly established as a mainstream buzzword. The reason for its success could be explored at length; however, the term is becoming overused, just like “Cloud”. It has come to mean many different things to different people in different situations. “Things” works well to describe technology reaching smaller items, but “Internet” is only a component of a broader field that we can call Digital Things.

This Digital Things revolution is largely driven by the recent accessibility of tools such as Arduino, Raspberry Pi and more – miniaturised computing that stretches even the definition of embedded computing. Millions of people are holding such tools in their hands wondering what to do with them. They all experience unique problems, and we see some amazing ideas emerge from these masses.

In health, the quantified self may eventually see information flow over the internet, but that’s not what all the fuss is about. Rather, it’s about Information from Things: measuring as much as we can, with new sensors enabling new waves of information. We want to collect this information and analyse it, and connecting these devices to the internet is certainly useful for that.

Then there are many applications for the Control of Things. Driverless cars are generally not internet connected; neither are vacuum robots, burger-building machines, a novel 100k-colour pen or many, many more things. It would seem that using the term Internet of Things as inspiration limits the possibilities.

In the end, Digital Things is the most suitable term to describe what we are seeing happen today. We are taking things in our lives which normally require manual work, and using embedded electronics to solve problems. Whether for information or control, the internet is not always necessary.

Let’s build some more Digital Things.

Civilisation Manual


What would happen if an asteroid struck our planet and left a handful of people to restart civilisation? Or if you and a few people washed up on an uninhabited island with nothing but the shirts on your backs? Many would picture building huts, scavenging for food, starting some basic crops if possible. But that would be it, the limit. You wouldn’t comprehend completely rebuilding civilisation and the luxuries available today. But I do. I’m curious: what would it take? If all you could take with you was a book, what would be written in that book? What does the Civilisation Manual say?

Whenever there is talk of civilisation, it seems that all you hear is philosophy, but seldom the practicality of achieving it. I assert that the creation of such a Civilisation Manual would be a useful undertaking, not so much for its hypothetical uses, but rather for its ability to teach how modern economies work. I believe that such a book should be able to contain all, if not more, of the information taught to children in school. Such a book might be very large.

There would also be additional questions to ask of the hypothetical end-of-the-world scenario. How long would it take to rebuild a civilisation to current-day technology? What tools would most quickly speed up the process? Is there a minimum number of people required for this to work? What level of intelligence is required to execute it? Just one genius? How long until the female primeval desire for shopping is satisfied, and the perfect shoe manufactured?


I would love to see a community website started to collect such information. We already have Wikipedia, but it does not tell you the intimate detail of how to find iron ore, how to cast iron, how to produce flour from wheat, or how to build a crude resistor or capacitor to help you make more refined components. It is this knowledge which is hard to find; perhaps we are forgetting how we built a digital civilisation.

Also, given the opportunity to build a civilisation from scratch, there may be some interesting ideas which could be included, never encountered in history before. For example, the book could focus on automation, relieving the humans from hard and repetitive tasks. This could go even further than what is achieved today. In 10 years, perhaps robots will be washing and ironing clothes, cooking meals, etc..

What a Civilisation Manual should NOT contain:

  • Advertising
  • References to Gilligan’s Island
  • Everything – include the most useful information first, and add more if you have time.

What a Civilisation Manual should contain:

  • Very brief justifications of suggestions – it’s not a history book, it’s a survival book, but it’s good to reassure the reader of the thought which goes into each of the suggestions. For example, rather than a bare instruction like “if X happens to a person, cut their leg off”, briefly describing blood poisoning would be more reassuring.
  • Tried and tested procedures and instructions – can a 10-year-old kid work it out, or does it require an academic professor? and do you replace the palm frond roof monthly or yearly?
  • Many appendices:
    • A roadmap to digital civilisation – showing a tree of pre-requisite steps and sections on achieving each of the steps.
    • Recipes – Particularly useful when all you’ve got is coconuts and fish. How do you clean a fish?
    • Inter-language Dictionary – who knows who you’ll be with.
    • Plant Encyclopaedia – Identification of and uses for plants.
    • Animal  Encyclopaedia – Do I cuddle the bear?
    • Health Encyclopaedia – How do I deliver the baby?

And an example of chapters:

  • Something like “Don’t panic, breathe… you took the right book, in 5 years you’ll have a coffee machine again”

  • Chapter 1: Basic Needs – You’ll find out about these first, food, water, shelter.
  • Chapter 2: Politics and Planning – Several solutions for governing the group should be provided to choose from, a bit like a glossy political catalogue. It won’t contain things like Dictatorship, Monarchy. More like Set Leader, Rotating Leader or The Civilisation Manual is our leader. Planning will mostly be pre-worked in the appendix, where technology succession is described with expected timelines for each item.
  • Chapter 3: Power  – No not electricity, power. This section explains its importance and how to harness power, from wind/water for milling to animals for plowing. Of course the progression of civilisation would eventually lead to electricity.
The book should also contain several pencils, many blank pages, and maybe we could sneak in a razor blade. This doesn’t break the rule of only being allowed one book – publishers are always including CDs and bookmarks…
I think it would be interesting anyway…

Robots – the working class


I have found myself considering whether doom would really befall the world if we mass employed robots to do all of our dirty work. Would we be overrun by machines which rose up and challenged their creators? Would our environment be destroyed and over polluted? I think not. In fact our lives would be much more comfortable and we would have a lot more time.

Life on earth got a lot better around the 1800s, the dawn of the industrial age. In the two centuries following 1800, the world’s average per capita income increased over 10-fold, while the world’s population increased over 6-fold [see Industrial Revolution]. Essentially machines – very simplistic robots – made human lives much better. With steam power and improved iron production, the world began to see a proliferation of machines which could make fabrics, work mines, machine tools, increase production of consumables, and enable and speed up the construction of key infrastructure. Importantly, it is from the industrial revolution that the term Luddite originated, describing those who resisted machines because their jobs were displaced.

We now find ourselves 200 or so years later, many of us in very comfortable homes, with plenty of time to pursue hobbies and leisure. There does, however, remain scope for continued development, allowing machines and robots to continue to improve the lives of people. It is understood that one or more patents actually delayed the beginning of the industrial age, which is of course why I advocate Technology Development Zones with relaxed rules regarding patents. However, I believe there is a very entrenched Luddite culture embedded in society.

Now being the organiser of the campaign, I have been accused of being a Luddite myself. However no progress has lasted without a sound business case. Furthermore, Luddites of the industrial revolution were specifically those put out of business by the machines.

Therefore the current or potential Luddites are:

  • The Automotive Industry status quo. – Movement to Electric Cars will make hundreds of thousands redundant. Consider how simple an electric car is {Battery, Controller, Motor, Chassis, Wheels, Steering}, and how complicated combustion engines are with the addition and weight of the radiator, engine block, oil, timing, computer,… And all the component manufacturers, fitters, mechanics and further supporting industries that will be put out of business.
  • The Oil industry (and LN2) – Somewhat linked to the Automotive industry. Energy could very well be transmitted through a single distribution system – electricity – at the speed of light. No more oil tankers, no more service stations, no more oil refineries, no more oil pipelines, no more oil mining, no more petrol trucks, no more oil spills. (The replacement for oil needs to be as economical or more economical – no ideologies here).
  • Transport industry – Buses, Trains, Trucks, Taxis, Sea Freight and even air travel all currently employ many thousands to sit in a seat and navigate their vehicle. Technology exists to take over and do an even better job. It’s not just the safety concerns delaying such a transition but also the Luddites (and patent squatters).
  • Farming – The technology is possible. We could have economical fruit-picking machines, and many mega-farm operations already have automatic harvesters for grain. Imagine all those rural towns, already under threat of becoming ghost towns, having to contend with technology replacing hard workers.
  • Manufacturing – Is already very efficient, but we still see thousands of people on production lines simply pressing a button. Most manufacturing jobs could be obliterated, with only one or two people required to oversee a factory – how lonely.
  • Housewives – Possibly not Luddites, given many would relish even more time for leisure and family, but so many of their tasks could be completely centralised and automated. Cooking and its associated appliances could be abolished entirely: why buy an oven, dishwasher, sink, fridge, freezer, cupboards, dinnerware, pots, pans and stove, and then spend 1-2 hours a day in the kitchen and supermarket, when you could order your daily meals from an industrial kitchen where all meals are prepared by robots for a fraction of the cost and time?
  • Construction – It’s amazing how many people it takes to build a skyscraper or house. Why does it still require people to actually build them? Why can’t houses be mass pre-fabricated by machines in factories then assembled by robots on-site? How many jobs would be lost as a result?
  • Services sector – There are many services-sector jobs where software and robots could easily be designed and built to relieve workers of their daily tasks. Accounting could be streamlined such that all business and personal finances are managed completely by software. With robots now aiding in surgery, why can’t robots actually perform the surgery, give a massage, or pull a tooth? Why are there so many public servants dealing with questions, answers and data entry when we have technology such as that found in Watson able to take over such tasks? Even many general practitioners are resisting the power available for self-diagnosis – do you think they’ll fund the further development of such tools?
  • Mining – Is as crude as grain farming and could easily be further automated, making thousands and thousands redundant in mines, and even those surveying future mining sites.
  • Education – How important is it to have children learn as much as possible while they’re young (beyond simple skills such as reading, writing and arithmetic), when the whole world could be run by software and robots? When complicated questions can be answered by a computer instead of a professor? Why lock children behind desks for 20 hours a week when they could be out playing?
  • Bureaucracy – With no workers there would be no unions and no union bosses, no minimum wage, no work safety inspector…
  • Military – (Ignoring the ideology of world peace.) We already see the success of the UAV, an aircraft which flies autonomously, only requiring higher-level command inputs for its mission. Why enhance soldiers when you can have robot soldiers? War could even be waged without blood, with the winner having enough firepower at the end to force the loser to surrender outright (quite ridiculous in reality – I know).
  • Care – There are many people employed to look after the sick and elderly. Even though the work can be challenging and the pay often low, it’s still a job – a job that robots could potentially do instead.
With time such a list could easily be expanded to encompass everyone. Are we all collectively resisting change?
With a world full of robots and software doing everything, what do humans do with 100% unemployment? Do we all dutifully submit our resumes to Robot Inc three times a week? Would we all get on each other’s nerves? Do we need to work? Would we lose all purpose? Ambition? Dreams?
To best understand how a robot utopia works, simplify the equation to one person: yourself on an island. You could work every day of your life to make sure you have enough water, food and shelter, or, if you arrived on the island with a sufficient complement of robots, you could enjoy being stranded in paradise. Every step from doing everything yourself toward doing nothing yourself sees your level of luxury increase.
There’s no doubt that the world will be divided into two classes, those that are human and have a holiday everyday, and those that are robots – the working class.

Improve security with compression


I have a particular interest in encryption and how to make it stronger. Whilst considering OTP and its vulnerability to reusing a random or pseudorandom stream on plain text, I was simulating the problem with a puzzle I have come across in the past. (Ever played one of those cryptoquip puzzles in the paper, where one letter is equivalent to another letter? You look at the small words and, with trial and error, guess words until they make sense across the whole sentence.)

I realised that encryption is significantly affected by the entropy of the input plain text. As far as I know this is an unproven hypothesis, but it is at least easily verifiable for simple encryption, such as that found in the cryptoquip puzzle. I believe that source entropy loses its significance in overall security as the encryption method itself improves, though this may only be because once encryption is sufficiently strong, doubling it has no perceivable outcome. For example, AES is considered one of the strongest, if not the strongest, symmetric encryption algorithm to date. Doubling the trillions upon trillions of units of computing power required to break it is not readily perceivable by our minds (and the ten digits on our hands).

It is commonly accepted that you should compress before you encrypt, because encryption increases entropy, which eliminates any worthwhile compression afterwards. It should be noted, though, that compression also increases entropy, which in light of this article is very good for security.

If you want good security you should consider using compression as well. You will have the benefit of an improved cipher as well as shorter messages. Perhaps compression can improve cipher strength enough that some more computationally efficient ciphers become as strong as or stronger than AES.
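The entropy claim is easy to demonstrate with a quick stdlib-only Python sketch: compressing repetitive English-like text raises its measured byte entropy while also shrinking it. The sample text is of course illustrative.

```python
import math
import zlib
from collections import Counter

def byte_entropy(data: bytes) -> float:
    """Shannon entropy of a byte string, in bits per byte (max 8.0)."""
    counts = Counter(data)
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Low-entropy, highly structured plain text...
plaintext = "".join(
    f"record {i}: the quick brown fox jumped over the lazy dog\n"
    for i in range(1000)
).encode()
compressed = zlib.compress(plaintext, 9)

# ...becomes shorter, near-random-looking data before encryption sees it.
assert byte_entropy(compressed) > byte_entropy(plaintext)
assert len(compressed) < len(plaintext)
```

Compression strips out exactly the statistical structure (letter frequencies, repeated words) that attacks like the cryptoquip-style analysis above exploit.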

I hope that one day we will see an encryption scheme which incorporates compression in its design. It may also incorporate other mechanisms to further increase the entropy of the input plain-text data. Building a joint compression/encryption algorithm may also yield performance improvements over separate consecutive compression and encryption steps.

It all sounds promising, but this is not an undertaking which I am experienced enough in to tackle.


Revisiting DIDO Wireless


I’ve had some time to think about the DIDO wireless idea, and I still think it has a very important part to play in the future – assuming the trial conducted with 10 user nodes is truthful. Before I explore the commercial benefits of this idea, I will first revisit the criticisms, as some have merit and will help scope a realistic business case.

Criticisms

  • One antenna per concurrent node – The trial used 10 antenna for 10 user nodes. Each antenna needs a fixed-line or directional wireless backlink, which would imply poor scalability of infrastructure. [Update: This is likely so, but Artemis claim the placement of each antenna can be random – whatever is convenient]
  • Scalability of DIDO – We are told of scaling up to 100s of antenna in a given zone. I question the complexity of the calculations for spatially dependent coherence; I believe the complexity is exponential rather than linear or logarithmic. [Update: The Artemis pCell website now claims it scales linearly]
  • Scalability of DIDO controller – Given the interdependence of signals, is the processing parallelisable? If not, this also limits the scale of deployment. [Update: Artemis claim it scales linearly]
  • Shannon’s Law not broken – The creators claim to break the Shannon’s law barrier. This appears to be hyperbole. They are not increasing spectrum efficiency; rather, they are eliminating channel sharing. The performance claims are likely spot on, but invoking “Shannon’s Law” was likely purely undertaken to generate hype – which is actually needed in the end, to get enough exposure for such a revolutionary concept.


Next, discussion surrounding neutralised claims, which may be reignited but are not considered weaknesses or strengths at this point in time.

  • Backhaul – Even though the antenna appear to require dispersed positioning, I don’t believe that backhaul requirements to the central DIDO controller need to be considered a problem. They could be fixed-line or directional wireless (point to point). [Update: This is not really a problem. Fibre is really cheap to lay for backhaul; it’s most expensive for the last mile. Many telcos have lots of unused dark fibre, and Artemis is partnering with telcos rather than trying to compete with them]
  • DIDO Cloud Data Centre – I take this as marketing hyperbole. Realistically a DIDO system needs a local controller; all layers above such a system are distractions from the raw technology in question. As such, the communication links between the local controller and antenna need not be IP transport-layer links, but could be link-layer or even physical-layer links.
  • Unlimited number of users – Appears to also be hyperbole; there is no technological explanation for such a sensational claim. We can hope, but I won’t list this as a pro until further information is provided. [Update: It does scale linearly, so this is a fair claim when compared to current cell topology, or to a pCell limited by exponential processing load]
  • Moving User Nodes – Some may claim that a moving node would severely limit the performance of the system. However, this pessimistically assumes a central serial-CPU-based system controls everything (a by-product of Rearden’s “Data Centre” claims). In reality I believe it’s possible for a sub-system to maintain a matrix of parameters for the main system to encode a given stream of data, and all systems may be optimised with ASIC implementations. Leaving this as a neutral but noteworthy point.
  • Size of Area of Coherence – Some may claim a problem with more than one person in an area of coherence, assumed to be around one half-wavelength across. How many people do you have 16cm away from you (900MHz)? Ever noticed high-density urbanisation in the country? (10-30MHz for ionosphere reflection – <15m half-wavelength) [Update: demonstrations have shown devices as close as 1cm from each other – frequency may still be a limiting factor of course, but that is a good result]
  • DIDO is MIMO – No: it’s very similar, and likely inspired by MIMO, but not the same. Generally MIMO is employed to reduce error, noise and multipath fading; DIDO is used to eliminate channel sharing. Two very different effects. MIMO precoding creates higher signal power at a given node – this is not DIDO. MIMO spatial multiplexing requires multiple antenna on both the transmitter and receiver, sending a larger-bandwidth channel via several lower-bandwidth channels – DIDO nodes only need one antenna – this is not DIDO. MIMO diversity coding is what it sounds like, diversifying the same information over different antenna to overcome wireless communication issues – this is not DIDO. [Update: Artemis and the industry are now standardising on calling it a C-RAN technology]
  • 1000x Improvement – Would this require 1000 antenna? Is this an advantage, given the number of antenna required? MIMO is noted to choke with higher concurrency of users. Current MIMO systems with 4 antenna can provide up to 4x improvement – such as in HSPA+. Is MIMO limited to the order of 10s of antenna? Many, many questions… [Update: This is likely so, but Artemis claim the placement of each antenna can be random – whatever is convenient]
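On the Size of Area of Coherence point above, the half-wavelength arithmetic is easy to verify with a two-line sketch:

```python
C = 299_792_458  # speed of light in a vacuum, m/s

def half_wavelength_m(frequency_hz: float) -> float:
    """Approximate scale of a DIDO area of coherence: half the wavelength."""
    return C / frequency_hz / 2

# ~16.7 cm at 900 MHz – roughly the personal-space figure quoted above.
assert 0.16 < half_wavelength_m(900e6) < 0.17
# 10 MHz (ionospheric reflection band): ~15 m, matching the "<15m" figure.
assert 14.9 < half_wavelength_m(10e6) < 15.0
```

So the coherence region shrinks as frequency rises, which is why the 1cm demonstration result above is notable.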

Strengths

  • Contention – Once a user is connected to a DIDO channel, there is no contention for the channel and therefore improved latency and bandwidth.
  • Latency – Is a very important metric, perhaps as important as bandwidth. Latency is often a barrier to many innovations. Remember that light propagates through optical fibre at two-thirds the speed of light.
  • Coverage – It seems that DIDO will achieve better coverage and fewer black spots than what is achievable with even cellular femtocells. Using new whitespace spectrum, rural application of pCell would be very efficient; and if rebounding off the ionosphere is still feasible, it is the answer to high-speed, high-coverage rural internet.
  • Distance – DIDO didn’t enable ionospheric radio communication, but it does make high-bandwidth ionospheric data communication possible. Elimination of inter-cell interference and channel sharing makes this very workable.
  • Physical Privacy – The area of coherence represents the only physical place the information intended for the user can be received and sent from. There would be potential attacks on this physical characteristic, by placing receivers adjacent to each DIDO antenna, and mathematically coalescing their signals for a given position. Of course encryption can still be layered over the top.
  • Bandwidth – The most obvious, but perhaps not the most important.
  • [New] Backward Compatibility – Works with the existing LTE hardware in phones. Works better with a native pCell modem, particularly for latency. Seamless handoff to cell networks, so it can co-operate.
  • [New] Wireless Power – Akbars (See Update below) suggested this technique could be used for very effective Wireless Power, working over much larger distances than current technology. This is huge!

Novel Strength

This strength needed particular attention.

  • Upstream Contention Scheduling – The name of this point can change if I find or hear of a better one. (TODO…)

Real World Problems

Unworkable Internet-Boost Solutions

I remember reading of a breakthrough where MEMS directional wireless was being considered as an internet boost. One would have a traditional internet connection and, when downloading a large file or movie, the information would be sufficiently cached in a localised base station (to accommodate a slow backlink or source) and then forwarded to the user as quickly as possible. This burst would greatly improve download times, and a single super-speed directional system would be enough to service thousands of users given its extreme speed and consumers' limited need for large transfers. Of course, even such a directional solution is limited to line of sight; perhaps it would need to be mounted on a stationary blimp above a city…

Mobile Call Drop-outs

How often do you find yourself calling back someone because your call drops out? Perhaps it doesn’t happen to you often because you’re in a particularly good coverage area, but it does happen to many people all the time. The productivity loss and frustration is a real problem which needs a real solution.

Rural Service

It is very economical to provide high-speed communication to many customers in a small area; for rural customers, however, the economics are reversed. Satellite communication is the technology of choice, but it is considerably more expensive, generally lower in bandwidth and subject to poor latency.

Real World Applications

The anticipated shortcomings of DIDO technology need not be deal breakers. The technology still has the potential to address real-world problems. Primarily, we must not forget the importance and dominance of wireless communications.

Application 1: A system could be built with, say, 10 areas of coherence (or more), used to boost current-technology internet connections. One could use a modest-speed ADSL2+ service of 5Mbps to easily browse the bulk of internet media {Text, Pictures}, and still download a feature-length movie at gigabit speeds. This is a solution for the masses.

Application 2: DIDO allows one spectrum to be shared without contention, but that spectrum need not be a single large allocation; it could mean a small (say 512Kbps) but super-low-latency connection. In a 10-antenna system, with 20MHz of spectrum and LTE-like efficiency, this could mean 6000 concurrent active areas of coherence. It would enable very good quality mobile communication with super-low latency and practically no black spots, as well as very effective video conferencing. All without cellular borders.
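That 6000 figure can be sanity-checked with a back-of-envelope calculation. The spectral efficiency below is my assumption of what "LTE-like" means (roughly LTE's 20MHz peak of ~300Mbps), not a number from the whitepaper:

```python
# Rough capacity check for Application 2 (assumed numbers, not Artemis specs).
bandwidth_hz = 20e6              # the 20MHz of spectrum
efficiency_bps_per_hz = 15.36    # my guess at "LTE-like" peak efficiency
antennas = 10                    # each antenna reuses the full spectrum
per_user_bps = 512e3             # the small, low-latency per-user channel

total_bps = bandwidth_hz * efficiency_bps_per_hz * antennas
areas = total_bps / per_user_bps
print(int(areas))  # → 6000 concurrent areas of coherence
```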

Applications 3 and 4: The same as Applications 1 and 2, but using a long-range ionosphere rural configuration.


We still don't know much about DIDO; the inventors have surrounded their idea with much marketing hype. People are entitled to be cautious – our history is littered with shams and hoaxes – and as it stands the technology appears to have real limitations. But this doesn't exclude it from the possibility of improving communication in the real world. We just need to see Rearden focus on finding a real-world market for its technology.


  • [2017-01-10] Finally, the hint text has disappeared completely, to be replaced with:
    • “supports a different protocol to each device in the same spectrum concurrently” – following up on their last update
    • “support multiple current and future protocols at once.” – this is a great new insight. They state right up top that pCell supports 5G and future standards. So, even without considering the increased capacity, customers don't need to keep redeploying new hardware into the field.
    • “In the future the same pWave Minis will also support IoT” – there are standards floating around, and what better way to implement security for IoT, than physically isolated wireless coherence zones, and perhaps very simplistic modulation.
    • “precise 3D positioning” – This confirms one of my predictions: pCell can supercharge the coming autopilot revolution.
    • “and wireless power protocols” – as I always suspected. However, it still seems impractical. This is likely just a candy-bar/hype statement.
    • “Or in any band from 600 MHz to 6 GHz” – it’s interesting to learn this specification – the limits of typical operation of pCell. I note they have completely abandoned long-wave spectrum (for now at least).
    • “pWave radios can be deployed wherever cables can be deployed” – I still think fibre/coax is going to be necessary, wireless backhaul is unlikely to be scalable enough.
    • “Typically permit-free” – does this refer to the wireless signal, I wonder? Very interesting if so. It could also refer to carrier licensing, because you're only carrying data; the information is only deduced back at the data centre.
    • “can be daisy-chained into cables that look just like cable TV cables” (from Whitepaper) – so perhaps long segments of coax are permitted to a base-station, but that base-station would likely require fibre out.
    • “pCell technology is far less expensive to deploy or operate than conventional LTE technology” – they are pivoting away from their higher-capacity message, now trying to compete directly against Ericsson, Huawei and others.
  • [2016-02-25] pCell will unlock ALL spectrum for mobile wireless. No more spectrum reservations. pCell could open up the FULL wireless spectrum for everyone! I hope you can grasp the potential there. Yesterday I read a new section on their website: “pCell isn't just LTE”. Each pCell can use a different frequency and wireless protocol. This means you could have emergency communications and internet both using 600MHz at the same time, metres away from each other! In 10 years I can see the wireless reservations being removed, and we'll have up to TERABITS per second of bandwidth available per person. I'm glad they thought of it; this is going to be the most amazing technology revolution of this decade, and will make fibre to the home redundant.
  • [2015-10-03] It's interesting that you can't find Hint 1 on the Artemis site, even when looking back in history (Google). In fact, as at 2015-02-19 it reads “Feb 19, 2014 – {Hint 2: a pCell…”, which is strange given my last update date below. Anyway, the newest hint may reveal the surprise:
    • “Massless” – Goes anywhere with ease
    • “Mobile” – outside your home
    • “Self-Powered” – either Wireless Power (unlikely), or it suggests that this pCell is like some sort of sci-fi vortex that persists without power from the user.
    • “Secure” – good for privacy conscious and/or business/government
    • “Supercomputing Instance” – I think this is the real clue, especially given Perlman’s history with a Cloud Gaming startup previously.
    • My best guesses at this stage in order of likelihood:
      • It's pCell VR – already found in their documentation; they just haven't updated their homepage. VR leverages the positioning information from the pCell VRI (virtual radio instance) to help a VR platform with both orientation and rendering.
      • Car Assist – Picks up on “Secure” and the positioning information specified for VR. VR is an application of pCell to a growing market; driverless is another growing market likely on their radar. Driverless cars have the most trouble navigating built-up, busy environments, particularly roundabouts. If pCell can help in any way, it's by adding an extra absolute-position information source that cannot be jammed. Of course, the car would also gain great internet connectivity, as well as the ability to track multiple vehicles centrally for more coordinated control.
      • Broader thin-client computing, beyond “just communications” – although one can argue against that, since pCell is a communications enabler. This would include business and gaming.
      • Emergency Response. Even without a subscription, it would be feasible to track non-subscribers' locations.
  • [2015-02-19] Read this article for some quality analysis of the technology – [Archive Link] – Old broken link:
  • [2015-02-19] Artemis have on their website – “Stay tuned. We’ve only scratched the surface of a new era.…{Hint: pCell technology isn’t limited to just communications}’ – I’m gunning that this will be the Wireless Power which Akbars suggested in his blog article. [Update 2015-10-03 which could be great for electric cars, although efficiency would still be quite low]
  • [2016-06-02] Technical video from CTO of Artemis –
    • Better coverage – a higher density of access points means fewer weak spots or black spots
    • When there are more antennas than active users, quality may be enhanced
    • Typical internet usage is conducive to minimising the number of antennas for an area
    • pCell is not Massive MIMO
    • pCell is Multi User Spatial Processing – perhaps MU-MIMO [see Caire’03, Viswanath’03, Yu’04]
    • According to mathematical modelling, densely packed MIMO antennas cause a large radius of coherent volume, while distributed antennas minimise the radius of the coherent volume. Which is intuitive.
    • see 4:56 – for a 3D visualisation of 10 coherent volumes (spatial channels) with 16 antennas. Antennas are 50m away from users – quite realistic. Targeting 5dB SINR.
    • pCell Data Centre does most of the work – Fibre is pictured arriving at all pCell distribution sites.
    • 1mW power for pCell, compared to 100mW for WiFi. @ 25:20

What the… Payroll tax?

I didn't really notice the GST debate, except being annoyed at all prices increasing when GST was introduced (I was in high school). It turns out that a major reason for its introduction was to eliminate many state taxes, one of these being payroll tax…

Have a look:

It turns out that if I employ too many people, I will have to pay the state a 4.9% tax on all the gross wages paid to my employees – including superannuation! Not only is this a disincentive to employ, it's also yet another administrative burden which limits growth. I hear it all the time that ultimate success requires flexibility and scalability – payroll tax is an ugly and unnecessary burden.
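To put a number on it, here is the burden on a hypothetical wage bill (rates and thresholds vary by state; the 4.9% and the figures below are examples only):

```python
# Illustrative payroll tax on gross wages including super (example rate).
def payroll_tax(gross_wages_incl_super, rate=0.049):
    return gross_wages_incl_super * rate

wage_bill = 2_000_000   # hypothetical annual wages + superannuation
print(f"${payroll_tax(wage_bill):,.0f} per year")  # → $98,000 per year
```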

Sure, we can't just pull such revenue out from under the states, but it can be replaced with revenue from another, more efficient tax – such as GST. At just 10%, our GST is relatively low compared to other countries; in Europe some countries have a GST or VAT of 25%.

So why not simply increase GST? Consumers, AKA voters, are the end users and effectively the ones who pay the tax. Even though consumers could ultimately pay less in the long run, because companies would no longer need to pay payroll tax, the whole economy changes. Smaller businesses that didn't previously pay payroll tax would effectively be charging their customers more, because they have no regained revenue from a dropped tax to discount from. Small changes to the rate over a long time, matched with reductions in payroll tax in the states, may work best. But in summary, GST rate increases are political poison for non-business-owning voters.

Another issue is fraud. As GST increases, the returns on VAT fraud become greater. Countries such as Sweden (25%) and the UK (20%) are subject to simple but harmful frauds which effectively steal from GST revenue. It basically works by having a fake company liable to pay GST and a legitimate company entitled to the refund; the fake company goes bankrupt. As the GST rate increases, the payback from such frauds increases, encouraging more incidents. It seems that any macroeconomic change, either short term (government stimulus) or long term (tax reform), opens the door for corruption and rorting. If the GST rate is to be increased, the right legislation needs to be in place to prevent such fraud.
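The incentive effect is simple arithmetic: the fraudster's take is the GST rate times the invoiced value, so the same scheme pays 2.5x more at 25% than at 10% (the numbers below are invented for illustration):

```python
# Toy model: the refund stolen by this kind of fraud scales with the GST rate.
def fraud_payoff(invoiced_value, gst_rate):
    return invoiced_value * gst_rate

for rate in (0.10, 0.20, 0.25):
    print(f"{rate:.0%}: ${fraud_payoff(1_000_000, rate):,.0f}")
```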

So, in the end, the ultimate way for a business to overcome payroll tax is to innovate: build good products which provide a comfortable return, and improve internal efficiency, reducing the need to hire as many staff and making it possible to maintain a competitive edge.

DIDO – Communication history unfolding?

First of all I just want to say – THIS MAY BE HUGE!!

I read this article last night:

In plain English, a company has discovered a way to dramatically improve mobile internet. It will supposedly be 5–10 years before it's commercialised; however, I believe it will happen sooner, as many realise just how revolutionary it is, invest more money, and attract more resources to get it done sooner.

I am not a representative of the company, but I have been involved in understanding and pondering wireless technology, even coming up with faster and more efficient wireless communication concepts – though none as ground-breaking as this one. I don't claim to know all the details for certain, but having read the whitepaper I believe I can quite accurately assume many details and future considerations. Anyway, I feel it's important for me to help everyone understand it.

How does it work (Analogy)?

Imagine walking down the street: everything is making noise – cars, people, the wind. It's noisy, and someone in the distance is trying to whisper to you. Suddenly all the noise disappears, and all you can hear is that person – clearly. This is because someone has adjusted all the sounds around you to cancel out, leaving just that person's voice.

How does it work (Plain English)?

When made available in 5-10 years:

  • Rural users will have speeds as fast as in CBDs, receiving signals from antennas as far as 400km away!
  • In cities there will need to be several antennas within ~60km of you
    • today there are so many required for mobile internet; the number will be reduced…
    • the number of antennas per mobile phone tower and building will be reduced to just one.
  • there will be a central “server” performing the mathematical calculations necessary for the system.

The most technical part (let’s break it down):

  1. Unintended interference is bad (just to clarify and contrast)…
  2. DIDO uses interference, but in a purposeful way
  3. DIDO uses multiple antennas, so that at a particular place (say your house), they interfere with each other in a controlled way, leaving a single channel intended for you.
  4. It’s similar to how this microphone can pick up a single voice in a noisy room –
    but a little different…

How does it work (Technical)?

I have been interested in two related concepts recently:

  1. Isolating a single sound in a noisy environment –
  2. I saw an interview with an ex-Australian spy who worked at a top-secret facility in Australia in co-operation with the US. He was releasing a book revealing what he could. From this facility he spied on radio communications around the world. I wondered how, and then figured they likely employ the “super microphone” method.

When I heard about this technology last night, I didn’t have time to look at the whitepaper, but assumed the receivers may have “super microphone” sort of technology. It turns out the inverse (not opposite) is true.


User A's radio is surrounded by radios from DIDO. The DIDO server calculates what signals need to be generated by the various radios so that when they converge on User A, they “interfere” as predicted, leaving the required signal. When there are multiple users, the mathematical equations take care of how to converge the signals. As a result, the wireless signal in the “area of coherence” for the user is as if the user has the full spectrum 1:1 to an external wireless base station.
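Here's a toy linear-algebra sketch of what I imagine the server computes (my interpretation of the whitepaper, not Rearden's published algorithm): model the channel from N antennas to N user positions as a matrix H, then solve for the antenna signals whose interference pattern delivers each user only their own symbol.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 4                                                       # antennas and users in this toy
H = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))  # channel matrix
s = rng.normal(size=N) + 1j * rng.normal(size=N)            # per-user symbols

x = np.linalg.solve(H, s)   # precoded signals to transmit from each antenna
y = H @ x                   # what actually converges at each user's position
print(np.allclose(y, s))    # True: each area of coherence sees only its own data
```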

Implications for domestic backhaul

There would need to be fibre links to each of the deployed antennas, but beyond that, the remaining backhaul and dark fibre will rapidly become obsolete. DIDO can reach 400km in rural mode, bouncing off the ionosphere, while still maintaining better latency than LTE, at 2–3ms.

Physical Security?

We hear about quantum communication and the impossibility of deciphering its messages. I believe a similar concept of physical security can be achieved with DIDO. Effectively, DIDO provisions areas of coherence: areas in 3D space where the signals converge, cancelling out signal information intended for other people. So you only physically receive a single signal on the common spectrum; you can't physically see anyone else's data unless you are physically in their target area of coherence. This does not, however, guarantee privacy. By deploying a custom system of additional receivers sitting outside the perimeter of your own area of coherence, you can sample the raw signals before they converge. Using complex mathematics, and armed with the exact locations of the DIDO system's antennas, one could theoretically single out the individual raw signals from each antenna and their times of origin, then calculate the converged signal at other areas of coherence. This is by no means a unique security threat, and of course one could simply employ encryption over the channel for secrecy.
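The attack can be shown in the same toy-model spirit (my construction, purely illustrative): an adversary who samples the raw per-antenna signals and knows the channel to a victim's position can recompute the signal that converges there.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 4
H = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))  # channel matrix
s = rng.normal(size=N) + 1j * rng.normal(size=N)            # users' symbols
x = np.linalg.solve(H, s)       # the raw antenna signals travelling on air

h_victim = H[0]                 # channel from all antennas to the victim's spot
recovered = h_victim @ x        # adversary reconstructs the converged signal
print(np.isclose(recovered, s[0]))  # True: the victim's data, recovered remotely
```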

This doesn’t break Shannon’s law?

As stated in their white paper, people incorrectly apply the law to spectrum rather than to a channel. Even before DIDO, one could use directional wireless from a central base station and achieve 1:1 channel contention (though that's difficult to achieve practically). DIDO creates “areas of coherence” where all a receiving antenna picks up is the signal intended for it.
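To make the channel-versus-spectrum distinction concrete: Shannon's limit C = B·log2(1 + SNR) applies per channel, so if each area of coherence is its own channel over the same band, aggregate capacity scales with their number (an idealised sketch; the 5dB SINR is just an example figure):

```python
import math

def shannon_capacity_bps(bandwidth_hz, snr_db):
    # Shannon-Hartley: capacity of ONE channel, not of the spectrum itself.
    return bandwidth_hz * math.log2(1 + 10 ** (snr_db / 10))

per_channel = shannon_capacity_bps(20e6, 5)
print(f"one 20MHz channel at 5dB SINR: {per_channel / 1e6:.1f} Mbps")
print(f"10 coherent channels, same band: {10 * per_channel / 1e6:.1f} Mbps")
```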

Better than Australia’s NBN plan

I've already seen some people attempt to discredit this idea, and I believe they are both ignorant and too proud to give up their beloved NBN. I have maintained the whole time that wireless technology will exceed the NBN believers' interpretation of Shannon's law. Remember, Shannon's law is about *channel*, not *spectrum*. DIDO is truly the superior option – gigabit speeds with no digging! And it's a clear warning that governments should never be trusted with making technology decisions. Because DIDO doesn't have to deal with channel access, the circuitry for the radios is immensely simplified. The bottleneck will likely be the ADCs and DACs, of which NEC has 12-bit 3.2-gigasample devices ( So multi-terabit and beyond is no major problem as we wait for the electronic components to catch up to the potential of wireless!


  • One aspect to beware of is the potential need for a 1:1 correlation between base-station antennas and users. I can't find any literature yet which either confirms or denies such a fixed correlation, but the tests for DIDO used 10 users and 10 antennas.
  • If there must be one antenna per user, this idea isn't as earth-shattering as I would hope. However, it would still be relevant: 1) it still achieves 100% spectrum reuse, 2) while avoiding the pitfalls of centralised directional beam-forming systems where obstacles are an issue, and 3) it retains the ability to leverage the ionosphere for rural applications – very enabling.
  • After reading the patent (2007), I see no mention of the relationship between AP antennas and the number of users. However, I did see that there is a practical limit of ~1000 antennas per AP. It should be noted that if this system does require one antenna per user, it would still be very useful as a boost system. That is, everyone has an LTE 4G link and then, when downloading a video, gets the bulkier data streamed very quickly via DIDO (the number of concurrent DIDO connections being limited by the number of AP antennas).
  • The basis for “interference nulling” was discussed in 2003 by Agustin et al.
  • Removed the many exclamation marks at the top, to symbolise the potential for disappointment.
  • Hey there’s a spell check!
  • Have a look here for whirlpool discussion:

Memorable IPv6 Addresses

Back in Nov 2009, I foresaw that IPv6 addresses would become a menace to memorise, so I had a crack at improving the memorability of such addresses. The basic idea is that sounds which make up words, or resemble words, are much easier to remember than individual digits. I was actually thinking about this idea last night – how it could be applied to remembering strong passwords.
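A sketch of the idea (this exact mapping is illustrative, not necessarily the scheme my tool uses): encode each 16-bit group of an IPv6 address as a pronounceable consonant-vowel “word”, so digits become sounds.

```python
import ipaddress

CONS = "bdfghjklmnprstvz"   # 16 consonants encode 4 bits each
VOWS = "aiou"               # 4 vowels encode 2 bits each

def chunk_to_word(n):
    # One 16-bit group -> consonant-vowel-consonant-vowel-consonant (4+2+4+2+4 bits)
    return (CONS[(n >> 12) & 0xF] + VOWS[(n >> 10) & 0x3] +
            CONS[(n >> 6) & 0xF] + VOWS[(n >> 4) & 0x3] +
            CONS[n & 0xF])

addr = ipaddress.IPv6Address("2001:db8::1")
groups = [(int(addr) >> (112 - 16 * i)) & 0xFFFF for i in range(8)]
print("-".join(chunk_to_word(g) for g in groups))
```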

This morning I got an email from a colleague who pointed out this: I don't believe the scheme used there is as memorable as mine, but it sounds like other people are having similar ideas.

Back to my thoughts last night on more memorable passwords. We know we're supposed to use upper and lower case, special symbols, etc. But even then you're not using the full 64-bit capacity of the recommended 8-character string. To use my scheme to memorise more secure passwords, you would simply use my tool.
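For a sense of the gap: even a fully random 8-character password drawn from all 94 printable ASCII symbols falls short of 64 bits, and human-chosen ones fall far shorter (my numbers, straightforward entropy arithmetic):

```python
import math

def entropy_bits(alphabet_size, length):
    # Bits of entropy in a uniformly random string of this length.
    return length * math.log2(alphabet_size)

print(round(entropy_bits(94, 8), 1))   # → 52.4 (all printable ASCII, fully random)
print(round(entropy_bits(26, 8), 1))   # → 37.6 (lowercase letters only)
```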

I made a video 🙂


Phishing Drill – Find your gullible users

Do you remember participating in fire drills at school? I remember them fondly – less school work for the day. I also remember earthquake drills when I went to school in Vancouver for a year. So what do drills do? They educate us about the signs and signals to look out for, and how to react. I believe spam filters work fairly well (that was a sudden change of subject). I use Gmail, where spam detection is built in; however, I still receive the occasional spam message. Educating those who fall for spam and phishing is an important factor in reducing the associated problems and scams. If all internet users had their wits about them, we could put spammers and phishers out of business – and most door-to-door salesmen. So how do we achieve this without million-dollar advertising campaigns?…. Drills. Spam/phishing drills or, to be more generic, perhaps Internet Gullibility Drills (IGD – everyone loves an initialism).

How do you drill the whole of the Internet? “Attention Internet, we will be running a drill at 13:00 UTC”… probably definitely not. My proposed method involves every web application which liaises with its customers by email, or is at risk of being spoofed in a phishing scam, running its own private drills. Such a drill would involve sending out an email message which resembles a real-life phishing/spam email. Each time, different variables could be used – email structure, sender address, recipient's name, a direct link to a spoof site. In any case, the drill should be able to detect those who fall for it. They can then be notified in a more delicate way than most would manage – “Haha – you just fell for our IGD, you loser!” is way off.
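The detection mechanics could be as simple as a unique tracking token per recipient in the drill email's link (a minimal sketch; all names and URLs below are hypothetical):

```python
import hashlib
import secrets

def drill_link(user_email, base="https://drill.example.com/landing"):
    # A per-user token: anyone who clicks identifies themselves to the drill.
    token = hashlib.sha256((user_email + secrets.token_hex(8)).encode()).hexdigest()[:16]
    return f"{base}?t={token}", token

link, token = drill_link("alice@example.com")
print(link)   # unique per user; a click maps back to the user via the token
```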

Ultimately, a gullibility-prevention-centre website would exist which users could be referred to, so they could refresh themselves on current threats, how to identify them and how to react. Quite a simple solution, and maybe I'm not the first to think of it – I didn't bother searching the Internet for a similar idea…


Creativity. Just Pulleys and Levers.

Growing up as a kid, I was captivated by magic tricks and wanted to know how they were done. Pulling a rabbit out of a hat, the sleight of hand, the magnets, the hidden cavity. They would have you believe that they were achieving something beyond the physical laws, that they had a supernatural power. TV shows and literature thrive on unveiling the surprisingly simple processes behind even the most elaborate illusions.

Creativity is the last remaining magic trick.

Western culture goes to great lengths to idolise and mystify it. “It's a gift”, “It's a talent”, “They must use the right side of their brain”. Paintings and artworks are highly prized, some running into the millions of dollars. The creative process in the mind seems elusive and magical. Society seems to think that creativity is only for a select few. The fanfare and mystique of creativity add to the performance.

They’re wrong.

Creativity is a simple process of random noise and judgement, two very tangible, logical concepts. It's a process, like a magician's rabbit in a hat. This doesn't take away from the impact of the product of creativity, but it does dispel the superhuman status of the skill.

Small Things

Creativity doesn't just happen once in an artwork; it happens multiple times, at different levels, in small amounts, but always with the same components of random noise and judgement.

A painter may start with a blank canvas and no idea of what they will paint. They then recall memories, images and emotions which all feed as both random noise and experience for judgement. They then choose a scene, the first round of creativity has occurred.

The painter will not recall perfectly all the details of the scene, but will have to choose how the scene would be composed. In their mind they imagine the horizon, the trees, perhaps a rock, or a stream, each time picturing in their minds different locations and shapes and judging aesthetic suitability. Another round of creativity has occurred, with many more elements of creation. Once painting with a brush in their hand, a painter may think ahead of the texture of the rock, the direction of the stream, the type of tree, the angle and amount of branches, the amount of leaves, and the colours.


They may stand back, look at what they have painted and decide to change an element. In this case, their single painting is one possibility of randomisation, and they have judged it to be substandard. They then picture other random forms and corrections, and judge the most appropriate course of action.

That whole process is the sum of smaller decisions, with good judgement and a flow of random ideas.

Small things everywhere

This is transferable to music composition. Instead of visualising, like the painter, composers play different melodies in their minds. Many musicians fluke a new melody: they make a mistake on their instrument, or purposefully allow themselves to play random notes. With judgement, they select the appropriate phrases.

It also works for song lyrics. Lyricists have a sea of words and language moving through their minds, and often randomise. How many words go through your head when you're trying to find a word that rhymes? With good judgement and some planning, the final set of lyrics can inspire. But there are plenty of draft pieces of paper in the bin.

The end products of creativity can be very impressive, but an artist won't discount their work as merely time and small things. There is one exception, though. Vincent van Gogh famously said, “Great things are done by a series of small things brought together”.

Design vs Performance

At this point, it's very important to comprehend two components of art: design and performance. Once a painting has been designed, it's easy to reproduce – or perform. Now, the painter may have refined their design through performance; however, they are left with a blueprint at the end for reproduction. Music is constructed in the same way, and is easily reproduced by many musicians. Lyrics can be recited, or sung to music, by a performer.

So what part of art is actually creative? Often the performance is an almost robotic function. Jazz combines performance and design at the same time: it's the design, the improvisation, that supplies the creative credential. Design is the crucial creative element. A painter executing the correct strokes on a canvas is simply a well-practised performance.

Random is inspiration

Randomisation can be, and most often is, external: anything we can receive at a low level through our five senses, or at a higher level through those senses, such as emotion. An executive is often presented with several options and uses judgement to select the most appropriate. They are not producing a painting or a song, but their process is still creativity – to society, a rather boring form of it. Software development is considered a very logical process, yet the end product is legally considered copyrighted literature. How could something so logical be attributed a magic-like status? This always conflicted me before; however, understanding creativity as noise and judgement in design-and-performance cycles helped rationalise creativity back to the mortal domain, and consequently allowed me to understand why software design is art.


I expect any artist who reads this article to be beside themselves – “software isn't art!”. But it's the same as uncovering the secret of a magician's trick. Artists are rightly protecting their trade secret, which doesn't bother me. I like the occasional magic show.


An expanded creativity formula:

R = Randomisation
J = Judgement
C = Creativity

C = J(R) – “Creativity is a function of Judgement of Randomisation”, as described above.

A breakdown of the formula's components, and further insight into my perception of the lower-level concepts (more for myself, to map it out):

E = Experience
K = Knowledge
A = Article to be judged – perceptions through senses and feelings

J = F(A,E,K) – Judgement is a function of an Article to be judged, Experience and Knowledge

M = Memory
SFJ = Senses and Feelings and Past Judgement

E = M(SFJ) – Experience is a class of Memory: that of senses, feelings and past judgement
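The formula can be sketched as a tiny generate-and-judge loop (a toy, with a deliberately silly judgement function that scores rhymes with “light”):

```python
import random

random.seed(42)
WORDS = ["night", "bright", "stone", "river", "flight", "cloud"]

def judge(word):
    # J: experience encoded as a judgement rule
    return word.endswith("ight")

candidates = [random.choice(WORDS) for _ in range(20)]   # R: random noise
chosen = [w for w in candidates if judge(w)]             # C = J(R)
print(sorted(set(chosen)))
```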


Poor Pets

I've got a dog, her name is Pipin; she's a border collie and she has a lot of personality. We take her for walks, knowing she needs it, and as a result we end up getting out more, getting exercise ourselves and meeting other people walking their dogs. It's amazing how social people are with pets. You can sit at a bus stop for 10 minutes and not talk to the person next to you, but when you have a dog it's all different.

Having said that, I can't stand Australia's sudden increase in the purchase of pets and associated goods. Some people go to the nth degree to pamper their pooch. Now, what people enjoy doing is up to them, and I wouldn't think it a problem in general if half the world weren't struggling in extreme poverty.

This is just a quick post. I reckon pet owners should accept responsibility and have a percentage of their purchases go to charity. For example, with one pet you would pay an additional 25% of all costs associated with your dog to a charity. If you have two dogs, then 50% of all the costs of the second would go to charity. This could be enforced by government or council. Obviously breeders, aid dogs and border-security dogs would be exempt. Just a thought – what do you think?