Shorter Work Weeks – A forgotten lever for the Automation Age

Close to the time of the industrial revolution, a trade union in England lobbied for "888": 8 hours for work, 8 hours for recreation and 8 hours for sleep. It’s important to note that the industrial revolution made quite a few artisans unemployed, but the wealth from that automation is what made this policy change possible.

Hey Elon Musk, Artificial Intelligence will not be as bad as you say.

It’s everywhere in the media at the moment [2017-02-19]: Elon Musk crystal-balling doom and gloom about automated intelligence, while living in a country made rich by automation.

The thing is, the less we do robotic, mundane things, the richer economies get, and the more human we become. Please watch this; it’s very well articulated.

Now, I have covered all this before in my previous blog article [Robots The Working Class]. There will be pockets of mass unemployment where Government policy has propped up flailing businesses, but overall this transition will be quite smooth and, again, hugely beneficial.

But I have continued thinking about this, and made an important realisation: we need to continue reducing our work week hours, to keep most people in some sort of traditional employment.

Back in the industrial revolution, this realisation took a while, and required a workers’ revolt (more about the hours than about sharing the jobs). The sooner Government masters this dusty old lever of working hours, the better.

Rather than campaigning for unemployment benefits, which carry the damaging problem of some bludging off others, I believe Government should continue to reduce the maximum hours in the work week, and keep more people employed.

This would start in the primary and secondary industries, which are being disrupted the most by automation. It would begin with a reduction of another half an hour every 5 years, increasing the pace as needed.

A lot more research needs to be done here, but this will be required as we move from the information age into the automated age.

(This was written without much review; this article will need more consideration and editing, and I’m hoping to research the whole domain of the work week more and more. By the way, workers might be paid more as their work week reduces, so their pay stays the same.)

Geelong needs a Makerspace

No one knows what you’ll discover or create. You’re unique. You have seen things that others haven’t, and face a set of problems unique to you and your friends.

So when you play with different tools and get exposed to bigger possibilities, there’s no telling what will happen.

Here’s one example of the unexpected you see in a makerspace. I bought this thermal imaging camera, a FlirOne, for an ecology project in Cape Otway, to track predators.

Have you ever looked at a cup of tea like this before?

First you turn on the kettle. You can see the water level where it’s the hottest.

Bring it to the boil. You can’t see that line anymore; the steam has made the kettle uniformly hot.

Pour (don’t forget to prepare the tea bag beforehand). Looks like lava.

Get some nice cold milk. You can see the reflection on the kettle (technically it’s not reflecting coldness – I know).

All marble-y – and that’s after mixing the milk. Probably just standard heat convection there.

Here are some videos:

I know of at least one makerspace in the making, for Rock O’Cashel Lane in the CBD. Make sure you get behind Kathy Reid and Jennifer Cromarty, and make this place legendary.

Technomotive – a new word for a digital age

Tech – no – mo – tive (adjective)

  1. A response in a person to hype over quantities, subjective quality, parameters and perceived possibilities.
  2. Discarding or overriding other important factors in a debate or decision, due to [1]
  3. Examples:
    • Being an audiophile, she bought the $5000 cable, blinded by her technomotive weakness.
    • Like any car salesman, they used technomotive language, reading out the 0-100 km/h acceleration time and the power output of the engine.
    • The politician knew the 100 Mbps figure would be technomotive to journalists and taxpayers alike.
    • Technomotive descriptions held the audience’s attention
    • The entire domain of technomotive persuasion is largely unexplored
  4. Related forms:
    Technomotively (adverb)
    Technomotiveness, Technomotivity (noun)

The need for a new word

Pathos, Ethos and Logos were recognised in ancient Greek times. Back then there were no computers, and no technology as we perceive it today, but there were quantities, so such a word would have been useful then too, just not as important as in the information age. The traditional motivators of Pathos, Ethos and Logos still contribute. The desire to brag to friends and family is an obvious, but not a necessary, underlying motivator.

The key difference in these times is that the pace of change introduces a tangible factor of obsolescence, and an environment of expectations and culture that arises around it. A person who considers technomotive factors is not necessarily technomotively persuaded, if they balance other considerations well. Although obsolescence is objectively real, it rarely justifies getting the best and paying a premium (although there are logical exceptions, such as Military and Space Science).

Technomotive persuasion has been a common technique for over 100 years, but never had a name. It is a word which helps analyse persuasive writing and arguments wherever quantities or qualities are expressed or experienced, typically in a technology context. This new word provides a handle for analysis, and is the beginning of deeper research into the domain.

A work in progress

I identified the need for this word about 6 years ago, and have made several attempts to articulate and refine it. I hope others will find it useful and contribute more ideas and research in this domain. I’ll continue to write more below, but hopefully the writing above is a sufficient and stable definition to move forward with.

Further examples

Lots of examples can be found in marketing material and article headlines. Here are some examples of technomotive language (in quotes):

  •  “Experience mind-blowing, heart pumping, knee shaking PhysX and …” – Technomotive language, appeals to the desire to have the best possible gaming experience.
  • “Today is so yesterday” – Technomotive language, appealing to desire to have the latest technology
  • “Planning to overclock your shiny new Core i5 system? Kingston reckons it has the RAM you need.” – Technomotive language, appeals to the desire to have the latest and most powerful technology.
  • The skyscraper was made up of 4,000T of steel and concrete
  • The new dam holds 4000GL of water, enough to…

More on Ignoring Economics

A good example is building one’s own PC. People with the money will often splurge on the best of everything, followed by benchmarking, to feed their technomotive desire for performance. When economics is considered, this isn’t the best choice: last year’s technology performs about 20% worse, but costs 50-80% less. Economics is less of a consideration when someone is driven by technomotive desire.
In decision-making, in the case of building a PC, the purpose might be gaming. One might justify the additional cost by the better quality of gameplay (another technomotivation). Rather than judging that the economics are unfavourable, one should instead reflect that technomotive desires have the biggest influence.


Notes which may be used to update the core definition or the section [need for a new word]:

  • “Accentuate quantities”
  • Informal word: “drooling”
  • Decouple from obsolescence – while not in the definition, it is in the detail. Obsolescence is related, but I suspect should be kept separate to clarify the definition of technomotive. The more existing terms are explored and used, the better Technomotive can be refined.
    • Technomotive – quantities.
    • Obsolescence –
    • Nostalgia – One doesn’t think of an old computer in terms of Technomotive, we consider it obsolete, but it can have appeal by way of nostalgia.


  • Persuasion
  • Horsepower
  • Kilowatt
  • Speed
  • Power

Personal Drones – Flying “Cars” for All

Great ideas and progress rarely come from well-worn paths. How long have we waited for Flying Cars? Many have tried turning cars into a sort of plane, or jet-hovering machines.

Now it’s possible: not by making cars fly, but by making drones big enough to carry people.

Drones are mainstream and mature. The industry has grappled with airspace and privacy rules, and created autopilot, stability systems and backup redundancy. Engineers have been reinvigorated to develop new algorithms and mechanical structures.

All of this is great for personal transport through the skies. With Flying Cars, we were expected to hold a recreational pilot licence, and although those engineers would have dreamed of autopilot, it was unobtainable. Drones have been a key stepping stone, and the newfound success of electric vehicles also paves a new path.

I suspect there are 10-20 years to go. The most critical element remaining is battery capacity. There are workarounds and hybrids, but when batteries get a science boost you’ll see a race to market from many key companies.

So stop hoping for Flying Cars, and start saving for your Personal Drone Transport. (And hopefully they find a good name for it)


Stasher – File Sharing with Customer Service

(This is quite a technical software article, written with software coders in mind)

It’s time for a new file sharing protocol. P2P in general is no longer relevant as a concept, and central file-sharing sites show that consumers are happy with centralised systems behind a web interface. I think I have a good idea for the next incremental step, but first some history.

It’s interesting how much P2P has died down. There was Napster, and other successes followed, but BitTorrent seems to have ruled them all. File discovery was lost along the way, and with Universal Plug and Play a big security concern, even re-uploading is off by default.

P2P is no longer needed. It was so valuable before because it distributed the upload bandwidth, and also somewhat anonymised users. But bandwidth continues to fall in price. MegaUpload and others like it were actually the next generation: they added some customer service around the management of files, and charged for premium service. Dropbox and others have since carved out even more again.

Stash (which is hopefully not trademarked) is my concept to bring back discovery. It’s a different world now, where many use VPNs and even Tor, so we don’t need to worry about security and anonymity.

It’s so simple, it’s easy to trust. With only a few hundred lines of code in a single file, anyone can compile their own copy, on Windows, in seconds, so there can be no hidden backdoors. Users who can’t be bothered with that can download the application from a trusted source.

It works by being ridiculously simple. A dumb application runs on your computer, set up to point to one or more servers. It operates only on one folder, the one it resides in. From there, the servers control Stasher. A client can do any of the following, and can ban a server from performing a particular action.

And that’s it. It’s so basic, you should never have to update the client. New features should be resisted. Thumbnails should be generated on the server – because there is time and bandwidth to simply get the whole file.

All of this works with varying software on the server, but the same Stash client. There is no direct P2P; however, several servers can coordinate, such that a controller server can ask a client to upload to another specific server. Such a service can pre-package the Stash client with specific servers, and throughout its lifetime the client’s server list can be updated with new servers.

I’m thinking of building this, but I’m in no rush. I’ll make it open source. Can you think of any other applications for such a general-purpose file sharing framework?

For more information, see


Security measures ideas:

  • [Future] Code Virtual Machine
    • Only System and VM namespaces used
    • VM namespace is a separate small DLL which interacts with the system { Files, Network, System Info }
    • It’s easier to verify that the VM component is safe in manual review.
    • It’s easy to automatically ensure the application is safe
    • Only relevant for feature-extended client, which will span multiple files and more
  • [Future] Security analyser works by decompiling the software – ideally a separate project

Remaining problems/opportunities:

  • Credit – who created that original photo showing on my desktop? They should get some sort of community credit, growing with the votes they receive. This needs some sort of separate/isolated server which takes a hash and signs/stores it with a datetime, and potentially also extra metadata such as author name/alias.
    • Reviewers, while not as important, should also be able to have their work registered somewhere. If they review 1000 desktop backgrounds, that’s time. Flickr, for example, could make a backup of such credit. Their version of the ledger could be signed and dated by a similar process.
  • Executable files and malware – 
    • AntiVirus software on the client
    • Trusting that the server makes such checks – e.g. looking inside non-executables for payloads, such as image file tails.
  • Hacked controller
    • File filters on the client to only allow certain file types (to exclude executable files) – { File extensions, Header Bytes }
    • HoneyPot Clients – which monitor activity, to detect changes in behavior of particular controllers
    • Human operator of controller types in a password periodically to assure that it’s still under their control. Message = UTCTimestamp + PrivateKeyEncrypt(UTCTimestamp), which is stored in memory.

Food Forever?

What if we could save our spoiling food before it was too far gone? I often have half a litre of milk which spoils at the office and I have to tip it down the sink.

I’m no biochemist, so I’m hoping this idea finds a nice home with a real scientist who either debunks it or points the way forward.

Could we have a home appliance which could UHT-treat leftover milk, so we could use it later or donate it?

Are there other foods which could be preserved in such a way? I’m guessing most would need an ultra-heat process. Like an autoclave, you need to kill all the bacteria with no regard for taste. If it’s meat, it might be tough, but it would at least be better pet food than what’s in a can.


5 Secret Strategies for GovHack

Monday night I attended the VIC GovHack Connections Event. No, there wasn’t any pizza, but there was a selection of cheeses, artichokes and more.

Here are my Top 5 tips

1) Do something very different

This competition has been running for a number of years, and the judges are seeing similar patterns emerging. Browse through previous years’ hackerspace pages and look at the types of projects they’ve had before. Look at the winners.

2) Evaluate the data

This might be the main aim of your project, but we want quality data for future years, and enough evidence to remove the unnecessary, find the missing, and refresh the old.

3) Prove real-time and live data

Melbourne City have their own feeds of real-time data this year. If you want to see more of that, consider using this data.

4) Simulate data

This strengthens your assessment of missing data [2], could involve simulated live data feeds [3], and would be very different [1].

5) Gather data

This is actually a bit harder than simulating data [4], but very useful. You could use computer vision, web scraping, or make an open app (like OpenSignal) that many people install to collect data.

Off the record

I’ve got a few ideas for GovHack projects in mind on the day. I’m not competing, so come and talk to me on Friday night or Saturday for ideas along these lines.

Try Scope Catch Callback [TSCC] for ES6

So it has started; it wasn’t a hollow thought bubble. I have started the adventure beyond the C# nest []. It will take a while, because I still have a lot of software that runs on C#, and I do still like the language, but all new development will be on ES6 and NodeJS.

So I’m going to record my outlook over a few blog posts. I re-discovered Cloud9 IDE, and I’ve got a few thoughts on architecture and a new feature for ES6.

Today, I’ll tell the world about my proposed ES6 enhancement.

Despite the ECMAScript committee stating they are about “Standards at Internet Speed”, there isn’t much Internet tooling to make that happen. They have certainly been successful in making rapid progress, but where does one submit an idea to the committee? There’s not even an email link. I’m certainly not going to cough up around $100k AUD to become a full member. [Update: they use GitHub; a link to it from their main website would be great. Also check out:]

So I’ll be satisfied to just put my first ES6 idea here.

Try blocks don’t work in a callback world. I’m sure there are libraries which could make this nicer; in C#, for instance, try blocks do work with the async language features.

So here is some code which won’t catch an error:

    try {
        $http.get(url).then((r) => {
            // handle the response
        });
    } catch (e) {
        // never reached: the request fails asynchronously
    }

In this example, if there is an error during the HTTP request, it will go uncaught.

That was simple, though. How about a more complex situation?

    function commonError(e) {
        // handle the error
    }

    try {
        runSQL(qry1, (result) => {
            runSQL(qry2, (result) => {
                // use both results
            }, commonError)
        }, commonError)
    } catch (e) {
        // never reached: errors surface in the callbacks
    }

Callback nesting isn’t very nice. This is why `await` is pushed forward as a good candidate. But what if the API you target doesn’t implement Promise? What if you only sometimes define a try block?
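When the API you target doesn’t implement Promise, one option is to wrap it yourself, so `await` and try/catch apply. Here `runSQL` stands in for the hypothetical callback API above; the `(query, onResult, onError)` wrapper shape is my assumption.

```javascript
// Wrap a (query, onResult, onError) callback API in a Promise
function runSQLAsync(runSQL, qry) {
  return new Promise((resolve, reject) => {
    runSQL(qry, resolve, reject);
  });
}

// One try/catch now covers both queries, replacing commonError
async function runBoth(runSQL) {
  try {
    const r1 = await runSQLAsync(runSQL, 'qry1');
    const r2 = await runSQLAsync(runSQL, 'qry2');
    return [r1, r2];
  } catch (e) {
    return 'handled: ' + e;
  }
}
```

This still leaves the question open, though: the wrapper only helps once you opt into Promises everywhere.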

My proposal is to supply a method which gets the Try Scope Catch Callback [TSCC]. If you don’t return a promise, it would be like this:

    function get(url, then, error) {
        error = error || window.callback.getTryScopeCatchCallback(); // TSCC

        // when an error occurs:
        //   error(e);

        // This could be reaching another
        // try/catch block, or the result
        // of a callback from another error method
    }
Promises already have a catch function in ES6. They’re so close! A Promise should direct its error/catch callback to the TSCC by default. If the Promise spec were updated to include this, my first example above would have caught the error with no changes to the code.
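The idea can be approximated in userland today with an explicit stack of catch callbacks. This is only a toy sketch of the proposal’s semantics; every name in it (`tryScope`, `getTryScopeCatchCallback`) is illustrative, not a real or proposed API surface.

```javascript
// Toy model of TSCC: a stack of catch callbacks standing in
// for the enclosing try scopes.
const catchStack = [];

function getTryScopeCatchCallback() {
  return catchStack[catchStack.length - 1] || ((e) => { throw e; });
}

// Run fn with catchCb registered as the current "try scope's" catch
function tryScope(fn, catchCb) {
  catchStack.push(catchCb);
  try {
    fn();
  } catch (e) {
    catchCb(e);
  } finally {
    catchStack.pop();
  }
}

// A callback API that falls back to the TSCC when no error
// callback is supplied
function get(url, then, error) {
  error = error || getTryScopeCatchCallback();
  setTimeout(() => error(new Error('request failed: ' + url)), 0); // simulated failure
}
```

Calling `get` inside `tryScope` means the async failure reaches that scope’s catch callback, even though the synchronous try block has long since exited, which is exactly the behaviour the proposal asks the language to provide automatically.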

So what do you think ECMA members, can we get this into ECMAScript?

Feedback log – from maillist

  • kdex

Why not just transform callback-based APIs into `Promise`s and use (presumably ES2017)
`await`/`async` (which *does* support `try`/`catch`)?

e.g.:

    try {
        await curl("");
        /* success */
    } catch (e) {
        /* error */
    }

  • My response

1. Whether you await or not, the try scope’s catch callback [TSCC] should still be captured.

2. If there is no use of Promise (for the coder’s own design reasons), the try scope’s catch callback [TSCC] should still be available.

GovHack – Do we need real-time feeds?

It’s the year 2016, and we still don’t know how many minutes away the next bus is in Geelong.

Public releases of data take time and effort, and unless they are routinely refreshed, they become stale. But certain types of information can’t be more than minutes old to be useful.

Traffic information is the most time-sensitive. The current state of traffic lights, whether any signals are currently out of order, and congestion information are already collected in real time in Australia. We could clearly benefit from such information being released as it happens.

But imagine this benchmark of up-to-the-minute was applied to all datasets. First of all, you wouldn’t have any aging data. More importantly, it would force data publication to be automated, and therefore scalable, so that instead of preparing another release of data, public servants could focus on the next type of data to make available.

What do you think?

Participate in GovHack this year, play with the data we do have and continue the conversation with us.

(I will be publishing a series of blog posts focusing on GovHack, exploring opportunities and challenges that arise while I work on the committee for the Geelong GovHack, which runs 29-31 July 2016)

Image courtesy Alberto Otero García licensed under Creative Commons

GovHack – What tools will you use this year?

The world is always changing, and in the world of technology it seems to change faster.

You certainly want to win some of the fantastic prizes on offer, but remember, we want world changing ideas to drive real change for real people, and we can do that best together.

So share with us and your fierce competitors, which new tools and techniques you plan to use this year.

Some popular new tools that I’m aware of include Kafka and MapMe.

Both of these feed into my own personal desire to capture more data and help Governments release data real-time. Check them out, and please comment below about any tools and techniques you plan to use this year.


Image courtesy RightBrainPhotography licensed under Creative Commons

What data do you want to see at GovHack?

Let’s forget about any privacy and national security barriers for the moment. If you could have any data from Government, what would you request?

GovHack is a great initiative which puts the spotlight on Government data. All of the departments and systems collect heaps of data every day, and lucky for us they’re starting to release some of it publicly.

You can already get topological maps, drainage points, bin locations, bbq locations, council budget data and much more. But that’s certainly not all the data they have.

Comment below on what data you think would be useful. It might already be released, but it would be interesting to go to Government with a nice long shopping list of data, ready for us to delve into next year.


Image courtesy Catherine, licensed under Creative Commons

GovHack – How can we collect more data?

If we had all the cancer information from around the world, any keyboard warrior could wrangle the data and find helpful new discoveries. But we struggle to even complete a state-level database let alone a national or global one.

After being dazzled by the enormous amount of data already released by Government, you soon realise how much more you really need.

For starters, there are lots of paper records that aren’t even digital. This isn’t just a Government problem, of course; many private organisations also grapple with managing unstructured written information on paper. But if Governments are still printing and storing paper in hard copy form, we further delay a fully open digital utopia. At the very least, storing atomic data separately from the merged and printed version enables future access, and stops the mindless discarding into the digital black hole.

Then consider all the new types of data which could be collected: the routes that garbage trucks and buses take, and the economics of their operation. If we had such data streams, we could tell citizens if a bus is running ahead or behind. We could have GovHack participants calculate more efficient routes. Could buses collect rubbish? We need data to know. More data means more opportunities for solutions and improvement for all.

When you consider the colossal task ahead of Government, we must insist on changing the culture, so that data releases are considered a routine part of public service, and further data collection an objective, not a bonus extra. Until that happens, large banks of knowledge will remain locked up in fortresses of paper.

What do you think? Do you know of any forgotten archives of paper that would be useful for improving lives?

Participate in GovHack this year, play with the data we do have and continue the conversation with us.


Image courtesy Fryderyk Supinski licensed under Creative Commons

Why I want to leave C#

Startup performance is atrocious and, critically, it slows down development. It’s slow to get the first page of a web application, slow to navigate to whole new sections, and worst of all: initial Entity Framework LINQ queries.

It’s 2016 and .Net is very mature, but this problem persists. I love the C# language far more than Java, but when it comes to the crunch, run-time performance is critical. Yes, I was speaking of startup performance, but you encounter it whenever new areas of the software warm up, and also when the AppPool is recycled (scheduled every 29 hours by default). Customers see it most, but it’s developers who must test and retest.

It wastes customers’ and developers’ time. Time means money, but the hidden loss is focus. You finally get focused on a task, but then have to wait 30 seconds for an ASP.NET web page to load so you can test something. Even stopping your debugging session in VS can take tens of seconds!

There are known ways to minimise such warmup problems, with native image generation and EF query caching, but neither is a complete solution. And why work around a problem not experienced in node.js, or even PHP?

.Net and C# are primarily for business applications. So how important is it to optimise a loop over millions of records (for big data and science), compared to the user and developer experience of starting and running with no delay?

Although I have been critical of Javascript as a language, recent optimisations are admirable. It has been optimised with priority on first-use speed, with critical sections optimised further as needed.

So unless Microsoft fixes this problem once and for all, without requiring developers to coerce workarounds, they’re going to find long-term dedicated coders such as myself shifting to Javascript, especially now that ECMAScript and TypeScript make Javascript infinitely more palatable.

I have already recently jettisoned EF in favour of a proprietary solution which I plan to open source. I also have plans for node.js and even my own IDE which I plan to lease. I’m even thinking of leaving the Managed world altogether – Heresy!

.Net has lots going for it, it’s mature and stable, but that’s not enough anymore. Can it be saved? I’m not sure.

Busted! Internet Community Caught Unprepared

Internet security (TLS) is no longer safe. That green HTTPS word, the golden padlock: all lies. The beneficiaries are the trusted third parties who charge for certificates. Yes, it sounds like a scam, but not one actively peddled; this one comes from complacency among the people who oversee the standards of the internet. Is there bribery involved? Who knows.

A month ago, there were no problems with TLS, because it was only on the 6th of October that a paper was published which paves the way to build machines that can break it. These machines are called Quantum Computers. Update: now a whole quantum computer architecture has been designed publicly (what has been done privately?), and one could be built for under $1B. So where’s the scam?

The nerds behind the Internet knew long ago about the threat of such a machine being developed. They also knew that new standards and processes could be built which are unbreakable even by a Quantum Computer. But what did they do? They sat on their hands.

I predicted in 2010 that it would take 5 years before a Quantum Computer would be feasible. I wasn’t specific about a mass production date. I was only 4 months out. Now it’s feasible for all your internet traffic to be spied on, including passwords, if the spy has enough money and expertise. But that’s not the worst part.

Your internet communication last year may be deciphered also. In fact, all of your internet traffic of the past, that you thought was safe, could be revealed, if an adversary was able to store it.

I wrote to Verisign in 2010 and asked what they were doing about the looming Internet emergency, and they brushed my concern aside. True, users have been secure to date, but they knew it was only a security rush. Like living in the moment and getting drunk, unconcerned about tomorrow’s hangover, users have been given snake oil: a solution that evaporates only years later.

All these years, money could have been poured into accelerated research. There are solutions today, but they’re not yet tested well enough. The least that could be done is a doubling of security: run both the tried-and-tested RSA and a new, theoretically unbreakable encryption in tandem.

Why is there still no reaction to the current security crisis? There are solid solutions that could be enacted today.


The Fraying of Communication and a proposed solution: Bind

In medicine, the misinterpretation of a doctor’s notes could be deadly. I propose that the ambiguity of even broader discourse has a serious and undiscovered impact. This problem needs to be researched and will be expounded further, but first I would like to explore a solution, which I hope will further open your understanding of the problem.

As with all effective communication, I’m going to name this problem: Fraying. As a mnemonic, consider each end of a frayed string as one of the many misinterpretations.

His lie was exposed, covered in mud, he had to get away from his unresponsive betraying friend: the quick brown fox jumped over the lazy dog.

That’s my quick attempt at an example where context can be lost. What did the writer mean? What might a reader or machine algorithm misinterpret it to mean? Even with the preceding context, the final sentence can still be interpreted many ways. It’s frayed in a moderate way, with minor impact.

In this example, the author could simply expound further on that final sentence, but that could ruin the rhythm for the reader of the story. Another method is to add such text in parentheses. Either way, it’s a lot of additional effort by multiple parties, and particularly in business, we strive to distil our messages to be short, sharp and to the point.

My answer of course is a software solution, but one where plain text is still handled and human readable. It’s a simple extensible scheme, and again I name it: Bind (going with a string theme).

The quick [fast speed] brown fox [animal] jumped [causing lift] over [above] the lazy dog [animal]

With this form, any software can present the data. Software that understands the scheme can remove the square brackets when there is no facility for an optimised viewing experience. For example:

The quick brown fox jumped over the lazy dog

(Try putting your mouse over the lighter coloured words)

Since the invention of the computer and keyboard, such feats have been possible, but not simply, and certainly not mainstream.

So it would be important to proliferate a Binding text editor which is capable of capturing the intent of the writer.

The benefits of Binding go beyond solving Fray. Bindings add more context for disability accessibility (I would argue Bind is classed as an accessibility feature, for normative people), and depending on how many words are Bound, they can even assist with language translation.

Imagine Google Translate with a Binding text editor: the translations would be much more accurate. Imagine Google Search, where you type “Leave”, hover over the word and select [Paid or unpaid time off work], leaving you less encumbered with irrelevant results.

Such input for search and translation need not wait for people to manually bind historical writing. Natural Language Processing can bear most of the burden and when reviewing results, a human can review the meaning the computer imputed, and edit as needed.

We just need to be able to properly capture our thoughts, and I’m sure we’ll get the hang of it.

Hey, by the way, please add your own narrative ideas for “the quick brown fox jumped over the lazy dog”, what other stories can that sentence tell?

Appendix – Further Draft Specification of Bind:

Trailer MetaData Option:

  • Benefit: the metadata is decoupled visually from the plain text. This makes viewing on systems without support for the Bind metadata still tolerable for users.
  • Format: [PlainText][8x Tabs][JSON Data]
  • JSON Schema: { "BindVersion": 1, "Bindings": […], "IdentifierType": "Snomed" (optional) }
  • Binding Schema: { "WordNumber": X, "Name": "Z", "Identifier": "Y", "Length": 1 }
  • Word Number: Word index, when words are delimited by whitespace and punctuation is trimmed.
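As a sketch of how the trailer option might round-trip in Python (the function names and the sample binding are mine, purely illustrative):

```python
import json

TRAILER = "\t" * 8  # the 8-tab separator from the draft format

def encode_trailer(plain_text, bindings):
    """Append Bind metadata after 8 tabs: [PlainText][8x Tabs][JSON Data]."""
    return plain_text + TRAILER + json.dumps(
        {"BindVersion": 1, "Bindings": bindings})

def decode_trailer(bound_text):
    """Split the plain text from its JSON trailer (if any)."""
    plain, _, meta = bound_text.partition(TRAILER)
    return plain, (json.loads(meta) if meta else None)

text = encode_trailer(
    "The quick brown fox",
    [{"WordNumber": 4, "Name": "animal", "Identifier": None, "Length": 1}])
plain, meta = decode_trailer(text)
print(plain)  # The quick brown fox
```

A viewer with no Bind support simply shows the plain text followed by whitespace, which is the “still tolerable” property the draft is after.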

Mixed MetaData Option:

  • When multiple preceding words are covered by the Binding, the number of dashes indicates how many more words are covered. Bind Text: “John Smith [-Name]” indicates the two words “John Smith” are a Name.
  • Identifiers from ontological databases such as Snomed may be represented with a final dash followed by the identifier. Bind Text: “John Smith [-Name-415]” indicates a word-definition identifier of 415, which may have a description of “A person’s name”.
  • When a square bracket is intended by the author, output a double square bracket. Bind Text: “John Smith [-Name] [[#123456]]” renders plainly to “John Smith [#123456]”
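The mixed form above can be parsed in a few lines of Python. This is only a sketch of my reading of the draft (the regex and dictionary keys are my own):

```python
import re

# Leading dashes extend a binding over preceding words (one dash = one
# extra word); an optional trailing "-<digits>" carries an ontology
# identifier, e.g. a Snomed code.
BINDING = re.compile(r"\s*\[(-*)(\w+)(?:-(\d+))?\]")

def parse_mixed(text):
    bindings = []
    def collect(match):
        dashes, name, ident = match.groups()
        bindings.append({"Name": name,
                         "WordsCovered": len(dashes) + 1,
                         "Identifier": ident})
        return ""
    plain = BINDING.sub(collect, text)
    # Unescape author-intended square brackets last.
    plain = plain.replace("[[", "[").replace("]]", "]")
    return plain, bindings

plain, bindings = parse_mixed("John Smith [-Name-415] lives here")
print(plain)  # John Smith lives here
```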

Digital Things

The “Internet of Things” is now well and truly established as a mainstream buzzword. The reason for its success could be explored at length; however, the term is becoming overused, just like “Cloud”. It has come to mean many different things to different people in different situations. “Things” works well to describe technology reaching smaller items, but “Internet” is only a component of a broader field that we can call Digital Things.

This Digital Things revolution is largely driven by the recent accessibility of tools such as Arduino, Raspberry Pi and more – a miniaturisation of computing that stretches even the definition of embedded computing. Millions of people are holding such tools in their hands wondering what to do with them. They all experience unique problems, and we see some amazing ideas emerge from these masses.

In health, the quantified self may eventually see information flow over the internet, but that’s not what all the fuss is about. Rather, it’s about Information from Things: measuring as much as we can, with new sensors enabling new waves of information. We want to collect this information and analyse it, and connecting these devices to the internet is certainly useful for doing so.

Then there are the many applications for the Control of Things. Driverless cars are generally not internet connected; neither are vacuum robots, burger-building machines, a novel 100k-colour pen or many, many more things. It would seem that using the term Internet of Things as inspiration limits the possibilities.

In the end, Digital Things is the most suitable term to describe what we are seeing happen today. We are taking things in our lives which normally require manual work, and using embedded electronics to solve problems, whether for information or control; the internet is not always necessary.

Let’s build some more Digital Things.

Geelong has a clean slate

I hope you’re done. Q&A was your last chance to detox from any doom and gloom you had left.

The loss of jobs, particularly at Ford, is not a pleasant experience for retrenched workers, but there’s no changing the past. The fact is Geelong now has a clean slate to dream big, and driverless electric vehicles is a perfect fit for the future of manufacturing.

On Q&A last night, Richard Marles was spot on, describing the automotive industry as one of our most advanced in supporting technical innovation in Australia. But ironically, the industry together has missed the boat and was always on a trajectory with disaster.

I have been watching the industry since 2010. I have observed the emerging phenomenon of the electric vehicle, and the needful but lacking interest from our local automotive industry. I have realised that any automation is to be embraced despite the unpleasant short-term job losses. And still we’re about to miss a huge opportunity.

The public forum is full of emotion, desperation, finger pointing, and frankly ignorance.

Geelong, we have a clean slate.

Kindly watch this video; it’s all Geelong needs to drop the past and grasp the future. Share it with your friends and call up all the politicians you know. It’s been there the whole time, and this vision for Geelong is all we need to forget our sorrows. You won’t understand unless you see the video. We need to act now.

I have covered Electric Vehicles comprehensively in the past, but they’re today’s reality. We need to aim higher. Does Geelong even know anything about driverless cars?

People are immediately cautious of change, which is why the technology needs to be tested and tested here in Geelong. This will be a great focal point for our retraining efforts. Imagine cheap transport and independence for the elderly and disabled. Cheaper, safer and faster deliveries. Reduced traffic congestion and elimination of traffic lights – no stopping! Cars that drop you off and pick you up will park out of town – what car parking problem? What will we do with all those empty car park spaces in the city? More green plants and al fresco dining?

But most importantly zero road fatalities. If this is the only reason, it’s all we need.

Driverless cars are legal in California today. What stepping stones will we take to legalise fully driverless cars in Victoria? These massive technology companies will only move next to hospitable markets. Who is talking to Nissan and Tesla about building the next generation of electric driverless vehicles in Geelong? We have been given a clean slate; there are too many exciting opportunities around to waste any more time on self-pity!

Oh, and trust me when I say that’s just the tip of the iceberg – I’m not telling you everything; find out for yourself. Click all the links found in this article for a start, it’s what they’re for.

Hint: There’s more to come from me, including the idea to start a “Manufacturing as a Service” company for Automotive, just like Foxconn does for electronics in China, inviting the Ford/Alcoa workers, their investment, GRIIF investment, outside investors and Tesla. There’s lots more work to do, but it’ll be worth it.


Mining Space

It’s quite an aspirational idea – to even look at mining asteroids in space. It may feel like something unreachable, something that will always be put off to the future. But the creation of the new company Planetary Resources is real, with financial backers and a significant amount of money behind it. We’re currently in transition: government, and particularly the U.S. government, is minimising its operational capacity for space missions, while the commercial sector is being encouraged and growing. For example, Sir Richard Branson’s Virgin Galactic, along with other organisations, is working toward real, affordable (if you’re rich…) space tourism, and by extension the commoditisation of space access in general, bringing down prices and showing investors that space isn’t just for science anymore – you can make a profit.

I recently read a pessimistic article, one where the break-even price for space mining is in the hundreds of millions of dollars for a given mineral. One needs to be realistic; however, I think the author is being way too dismissive. You see, there are many concepts in the pipeline which could significantly reduce the cost of earth–space transit. My favourite is the space elevator, where you don’t need a rocket to reach kilometres above the earth (although you would likely still need some sort of propulsion to accelerate into a stable orbit).

But as well as being across technology, a critic needs to be open to other ideas. For example, why bring the minerals back to Earth? Why not attempt to create an extra-terrestrial market for them? It may well cost much more to launch a large bulk of materials into orbit than to extract materials from an asteroid (in the future), with space factories building cities in space.

Of course, I still think space mining is hopeful at best, let’s balance despair with hopeful ideas.


Civilisation Manual


What would happen if an asteroid struck our planet and left a handful of people to restart civilisation? Or if you and a few people washed up on an uninhabited island with nothing but the shirts on your backs? Many would picture building huts, scavenging for food, starting some basic crops if possible. But that would be it, the limit. You wouldn’t comprehend completely rebuilding civilisation and the luxuries available today. But I do. I’m curious: what would it take? If all you could take with you was a book, what would be written in that book? What does the Civilisation Manual say?

Whenever there is talk of civilisation, it seems all you hear is philosophy, but seldom the practicality of achieving it. I assert that the creation of such a Civilisation Manual would be a useful undertaking, not so much for its hypothetical uses, but rather for its ability to teach how modern economies work. I believe such a book should be able to contain all, if not more, of the information taught to children in a school. Such a book might be very large.

There would also be additional questions to be asked of the hypothetical end-of-the-world scenario. How long would it take to rebuild a civilisation to current-day technology? What tools would most quickly speed up the process? Is there a minimum number of people required for this to work? What level of intelligence is required to execute it? Just one genius? How long until the female primeval desire for shopping is satisfied? And the perfect shoe manufactured?


I would love to see a community website started to collect such information. We already have Wikipedia, but it does not tell you the intimate detail of how to find iron ore, how to cast iron, how to produce flour from wheat, or how to build a crude resistor or capacitor to help you make more refined components. It is this knowledge which is hard to find; perhaps we are forgetting how we built our digital civilisation.

Also, given the opportunity to build a civilisation from scratch, there may be some interesting ideas which could be included, never encountered in history before. For example, the book could focus on automation, relieving the humans from hard and repetitive tasks. This could go even further than what is achieved today. In 10 years, perhaps robots will be washing and ironing clothes, cooking meals, etc..

What a Civilisation Manual should NOT contain:

  • Advertising
  • References to Gilligan’s Island
  • Everything – include only the most useful information, and if you have time, add more.

What a Civilisation Manual should contain:

  • Very brief justifications of suggestions – it’s not a history book, it’s a survival book. It’s good to reassure the reader of the thought which goes into each of the suggestions. For instance, rather than simply “if X happens to a person, cut their leg off”, briefly describing blood poisoning might be more reassuring.
  • Tried and tested procedures and instructions – can a 10-year-old kid work it out, or does it require an academic professor? And do you replace the palm-frond roof monthly or yearly?
  • Many appendices:
    • A roadmap to digital civilisation – showing a tree of pre-requisite steps and sections on achieving each of the steps.
    • Recipes – Particularly useful when all you’ve got is coconuts and fish. How do you clean a fish?
    • Inter-language Dictionary – who knows who you’ll be with.
    • Plant Encyclopaedia – Identification of and uses for plants.
    • Animal Encyclopaedia – Do I cuddle the bear?
    • Health Encyclopaedia – How do I deliver the baby?

And an example of chapters:

  • Something like “Don’t panic, breathe… you took the right book; in 5 years you’ll have a coffee machine again”

  • Chapter 1: Basic Needs – You’ll find out about these first, food, water, shelter.
  • Chapter 2: Politics and Planning – Several solutions for governing the group should be provided to choose from, a bit like a glossy political catalogue. It won’t contain things like Dictatorship, Monarchy. More like Set Leader, Rotating Leader or The Civilisation Manual is our leader. Planning will mostly be pre-worked in the appendix, where technology succession is described with expected timelines for each item.
  • Chapter 3: Power – No, not electricity: power. This section explains its importance and how to harness it, from wind and water for milling to animals for ploughing. Of course, the progression of civilisation would eventually lead to electricity.
The book should also contain several pencils, many blank pages, and maybe we could sneak in a razor blade. This doesn’t break the rule of only being allowed to have a book – publishers are always including CDs and bookmarks…
I think it would be interesting anyway…

Robots – the working class


I have found myself considering whether doom would really befall the world if we mass employed robots to do all of our dirty work. Would we be overrun by machines which rose up and challenged their creators? Would our environment be destroyed and over polluted? I think not. In fact our lives would be much more comfortable and we would have a lot more time.

Life on earth got a lot better around the 1800s, the dawn of the industrial age. In the two centuries following 1800, the world’s average per capita income increased over 10-fold, while the world’s population increased over 6-fold [see Industrial Revolution]. Essentially machines – very simplistic robots – made human lives much better. With steam power and improved iron production, the world began to see a proliferation of machines which could make fabrics, work mines, drive machine tools, increase the production of consumables, and enable and speed up the construction of key infrastructure. Importantly, it is from the industrial revolution that the term Luddite originated, describing those who resisted machines because their jobs were displaced.

We now find ourselves 200 or so years later, many of us in very comfortable homes, with plenty of time to pursue hobbies and leisure. There does, however, remain scope for continued development, allowing machines and robots to continue to improve the lives of people. It is understood that one or more patents actually delayed the beginning of the industrial age, which of course is why I advocate the Technology Development Zones which have relaxed rules regarding patents. However, I believe there is a very entrenched Luddite culture embedded in society.

Now being the organiser of the campaign, I have been accused of being a Luddite myself. However no progress has lasted without a sound business case. Furthermore, Luddites of the industrial revolution were specifically those put out of business by the machines.

Therefore, the current or potential Luddites are:

  • The Automotive Industry status quo. – Movement to Electric Cars will make hundreds of thousands redundant. Consider how simple an electric car is {Battery, Controller, Motor, Chassis, Wheels, Steering}, and how complicated combustion engines are with the addition and weight of the radiator, engine block, oil, timing, computer,… And all the component manufacturers, fitters, mechanics and further supporting industries that will be put out of business.
  • The Oil industry (and LN2) – Somewhat linked to the Automotive industry. Energy could very well be transmitted through a single distribution system – electricity – at the speed of light. No more oil tankers, no more service stations, no more oil refineries, no more oil pipelines, no more oil mining, no more petrol trucks, no more oil spills. (The replacement for oil needs to be as economical or more economical – no ideologies here).
  • Transport industry – Buses, Trains, Trucks, Taxis, Sea Freight and even air travel all currently employ many thousands to sit in a seat and navigate their vehicle. Technology exists to take over and do an even better job. It’s not just the safety concerns delaying such a transition but also the Luddites (and patent squatters).
  • Farming – The technology is possible. We could have economical fruit-picking machines, and many mega-farm operations already have automatic harvesters for grain. Imagine all those rural towns, already under threat of becoming ghost towns, having to contend with technology replacing hard workers.
  • Manufacturing – Is already very efficient, but we still see thousands of people on production lines simply pressing a button. Most manufacturing jobs could be obliterated, with only one or two people required to oversee a factory – how lonely.
  • Housewives – Are possibly not Luddites, given many would relish even more time for leisure and their family; however, so many of their tasks could be completely centralised and automated. Cooking and associated appliances could be abolished entirely: why buy an oven, dishwasher, sink, fridge, freezer, cupboards, dinnerware, pots, pans and stove, and then spend 1–2 hours a day in the kitchen and supermarket, when you could order your daily meals from an industrial kitchen where all meals are prepared by robots for a fraction of the cost and time?
  • Construction – It’s amazing how many people it takes to build a skyscraper or house. Why does it still require people to actually build them? Why can’t houses be mass pre-fabricated by machines in factories then assembled by robots on-site? How many jobs would be lost as a result?
  • Services sector – There are many services-sector jobs where software and robots could easily be designed and built to relieve workers of their daily tasks. Accounting could be streamlined such that all business and personal finances are managed completely by software. With robots now aiding in surgery, why can’t robots actually perform the surgery, give a massage, or pull a tooth? Why are there so many public servants dealing with questions, answers and data entry when we have technology such as that found in Watson able to take over such tasks? Even many general practitioners are resisting the power available for self-diagnosis – do you think they’ll fund the further development of such tools?
  • Mining – Is as crude as grain farming and could easily be further automated, making thousands and thousands redundant in mines, and even those surveying future mining sites.
  • Education – How important is it to have children learn as much as possible while they’re young (beyond simple skills such as reading, writing and arithmetic), when the whole world could be run by software and robots? When complicated questions can be answered by a computer instead of a professor? Why lock children behind desks for 20 hours a week when they could be out playing?
  • Bureaucracy – With no workers there would be no unions and no union bosses, no minimum wage, no work safety inspector…
  • Military – (Ignoring the ideology of world peace.) We already see the success of the UAV, an aircraft which flies autonomously, only requiring higher-level command inputs for its mission. Why enhance soldiers when you can have robot soldiers? War could even be waged without blood, with the winner having enough fire-power at the end to force the loser to surrender outright (quite ridiculous in reality – I know).
  • Care – There are many people employed to look after the sick and elderly. Even though the work can be challenging and the pay often low, it’s still a job – a job that robots could potentially do instead.
With time such a list could easily be expanded to encompass everyone. Are we all collectively resisting change?
With a world full of robots and software doing everything, what do humans do with 100% unemployment? Do we all dutifully submit our resumes to Robot Inc three times a week? Would we all get on each other’s nerves? Do we need to work? Would we lose all purpose? Ambition? Dreams?
To best understand how a robot utopia works, simplify the equation to one person – yourself on an island. You could work every day of your life to make sure you have enough water, food and shelter, or, if you arrived on the island with a sufficient complement of robots, you could enjoy being stranded in paradise. Every step in between, from doing everything yourself toward doing nothing yourself, sees your level of luxury increase.
There’s no doubt that the world will be divided into two classes, those that are human and have a holiday everyday, and those that are robots – the working class.

Revisiting DIDO Wireless


I’ve had some time to think about the DIDO wireless idea, and I still think it has a very important part to play in the future – assuming the trial conducted with 10 user nodes is truthful. Before I explore the commercial benefits of this idea, I will first revisit the criticisms, as some have merit and will help scope a realistic business case.



Weaknesses

  • One antenna per concurrent node – The trial used 10 antennas for 10 user nodes. Each antenna needs a fixed line or directional wireless backlink – this would imply poor scalability of infrastructure. [Update: This is likely so, but Artemis claim the placement of each antenna can be random – whatever is convenient]
  • Scalability of DIDO – We are told of scaling up to hundreds of antennas in a given zone. I question the complexity of the calculations for spatially dependent coherence; I believe the complexity is exponential rather than linear or logarithmic. [Update: The Artemis pCell website now claims it scales linearly]
  • Scalability of DIDO controller – Given the interdependence of signals, is the processing parallelisable? If not, this also limits the scale of deployment. [Update: Artemis claim it scales linearly]
  • Shannon’s Law not broken – The creators claim to break the Shannon’s Law barrier. This appears to be hyperbole. They are not increasing spectral efficiency; rather, they are eliminating channel sharing. The performance claims are likely spot on, but invoking “Shannon’s Law” was likely undertaken purely to generate hype. Which is actually needed in the end, to get enough exposure for such a revolutionary concept.


The following are neutralised claims which may be reignited, but are not considered weaknesses or strengths at this point in time:

  • Backhaul – Even though the antennas appear to require dispersed positioning, I don’t believe the backhaul requirements to the central DIDO controller need to be considered a problem. They could be fixed line or directional wireless (point to point). [Update: This is not really a problem. Fibre is really cheap to lay for backhaul; it’s the last mile that is most expensive. Many telcos have lots of dark fibre not being used, and Artemis is partnering with telcos rather than trying to compete with them]
  • DIDO Cloud Data Centre – I take this as marketing hyperbole. Realistically a DIDO system needs a local controller, all other layers above such a system are distractions from the raw technology in question. And as such, the communication links between the local controller and antenna need not be IP transport layer links, but would rather be link layer or even physical layer links.
  • Unlimited number of users – Appears to also be hyperbole; there is no technological explanation for such a sensational claim. We can hope, but not place this as a Pro until further information is provided. [Update: It does scale linearly, so this is a fair claim when compared to current cell topology, or to a pCell limited by exponential processing load]
  • Moving User Nodes – Some may claim that a moving node would severely limit the performance of the system. However, this pessimistically assumes a central serial-CPU-based system controls everything (a by-product of Rearden’s “Data Centre” claims). In reality I believe it’s possible for a sub-system to maintain a matrix of parameters for the main system to encode a given stream of data. And all systems may be optimised with ASIC implementations. Leaving this as a neutral but noteworthy point.
  • Size of Area of Coherence – Some may claim a problem with more than one person in an area of coherence, assumed to be around one half-wavelength. How many people do you have 16 cm away from you (900 MHz)? Ever noticed high-density urbanisation in the country? (10–30 MHz for ionosphere reflection – <15 m half-wavelength) [Update: demonstrations have shown devices as close as 1 cm away from each other – frequency may still be a limiting factor of course, but that is a good result]
  • DIDO is MIMO – No, it’s very similar but not the same, and is likely inspired by MIMO. Generally MIMO is employed to reduce error, noise and multipath fading; DIDO is used to eliminate channel sharing. Two very different effects. MIMO precoding creates higher signal power at a given node – this is not DIDO. MIMO spatial multiplexing requires multiple antennas on both the transmitter and receiver, sending a larger-bandwidth channel via several lower-bandwidth channels – DIDO nodes only need one antenna – this is not DIDO. MIMO diversity coding is what it sounds like, diversifying the same information over different antennas to overcome wireless communication issues – this is not DIDO. [Update: Artemis and the industry are now standardising on calling it a C-RAN technology]
  • 1000x Improvement – Would this require 1000 antennas? Is this an advantage given the number of antennas required? MIMO is noted to choke with higher concurrency of users. Current MIMO systems with 4 antennas can provide up to 4x improvement – such as in HSDPA+. Is MIMO limited to the order of tens of antennas? Many, many questions… [Update: This is likely so, but Artemis claim the placement of each antenna can be random – whatever is convenient]
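To put some numbers on the half-wavelength sizing used in the coherence point above, a quick back-of-envelope in Python:

```python
# Half-wavelength size of an area of coherence at a given carrier frequency.
C = 299_792_458  # speed of light in a vacuum, m/s

def half_wavelength_m(freq_hz):
    return C / freq_hz / 2

print(round(half_wavelength_m(900e6) * 100, 1))  # ~16.7 cm at 900 MHz
print(round(half_wavelength_m(20e6), 1))         # ~7.5 m at 20 MHz (ionosphere band)
```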


Strengths

  • Contention – Once a user is connected to a DIDO channel, there is no contention for the channel, and therefore improved latency and bandwidth.
  • Latency – Is a very important metric, perhaps as important as bandwidth. Latency is often a barrier to many innovations. Remember that light propagates through optical fibre at two-thirds the speed of light.
  • Coverage – It seems that DIDO will achieve better coverage and fewer black spots than what is achievable even with cellular femtocells. Using new whitespace spectrum, rural application of pCell would be very efficient, and if rebounding off the ionosphere is still feasible, the answer to high-speed, high-coverage rural internet.
  • Distance – DIDO didn’t enable ionosphere radio communications, but it does make ionospheric high-bandwidth data communication possible. Elimination of inter-cell interference and channel sharing makes this very workable.
  • Physical Privacy – The area of coherence represents the only physical place the information intended for the user can be received and sent from. There would be potential attacks on this physical characteristic, by placing receivers adjacent to each DIDO antenna, and mathematically coalescing their signals for a given position. Of course encryption can still be layered over the top.
  • Bandwidth – The most obvious, but perhaps not the most important.
  • [New] Backward Compatibility – Works with existing LTE hardware in phones. Works better with a native pCell modem, particularly for latency. Seamless handoff to cell networks, so it can co-operate.
  • [New] Wireless Power – Akbars (See Update below) suggested this technique could be used for very effective Wireless Power, working over much larger distances than current technology. This is huge!

Novel Strength

This strength needed particular attention.

  • Upstream Contention Scheduling – The name of this point can change if I find or hear of a better one. (TODO…)

Real World Problems

Unworkable Internet-Boost Solutions

I remember reading of a breakthrough where MEMS directional wireless was being considered as an internet boost. One would have a traditional internet connection, and when downloading a large file or movie, the information would be sufficiently cached in a localised base station (to accommodate a slow backlink or source) and then forwarded to the user as quickly as possible. This burst would greatly improve download times, and a single super-speed directional system would be enough to service thousands of users given its extreme speed and consumers’ limited need for large transfers. Of course, even such a directional solution is limited to line of sight; perhaps it would need to be mounted on a stationary blimp above a city…

Mobile Call Drop-outs

How often do you find yourself calling back someone because your call drops out? Perhaps it doesn’t happen to you often because you’re in a particularly good coverage area, but it does happen to many people all the time. The productivity loss and frustration is a real problem which needs a real solution.

Rural Service

It is very economical to provide high-speed communication to many customers in a small area; however, when talking of rural customers, the equations are reversed. Satellite communication is the preferred technology of choice, but it is considerably more expensive, generally a lower-bandwidth solution, and subject to poor latency.

Real World Applications

The anticipated shortcomings of DIDO technology need not be considered deal breakers. The technology still has the potential to address real-world problems. Primarily, we must not forget the importance and dominance of wireless communications.

Application 1: A system could be built with 10 areas of coherence (or more), used to boost current-technology internet connections. One could use a modest-speed ADSL2+ service of 5 Mbps to easily browse the bulk of internet media {Text, Pictures} and still download a feature-length movie at gigabit speeds. This is a solution for the masses.

Application 2: DIDO allows one spectrum to be shared without contention, but that spectrum need not be a single large allocation of spectrum, it could mean a small (say 512Kbps) but super low latency connection. In a 10 antenna system, with 20Mhz of spectrum and LTE-like efficiency this could mean 6000 concurrent active areas of coherence. So it would enable very good quality mobile communication, with super low latency and practically no black-spots. It would also enable very effective video conferencing. All without cellular borders.
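The Application 2 arithmetic checks out roughly as follows (the ~16 bit/s/Hz spectral efficiency is my assumption for “LTE-like efficiency”; no exact figure was given):

```python
# Rough capacity check for Application 2 (assumed values).
antennas = 10                 # concurrent areas of coherence per spectrum reuse
bandwidth_hz = 20e6           # shared spectrum
efficiency_bps_per_hz = 16    # assumed LTE-Advanced-class peak efficiency
per_user_bps = 512e3          # small, low-latency channel per user

total_bps = antennas * bandwidth_hz * efficiency_bps_per_hz
concurrent_users = int(total_bps / per_user_bps)
print(concurrent_users)  # 6250 – in the ballpark of the 6000 quoted above
```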

Applications 3 and 4: The same as Applications 1 and 2, but using a long-range ionosphere rural configuration.


We still don’t know too much about DIDO; the inventors have surrounded their idea with much marketing hype. People are entitled to be cautious – our history is littered with shams and hoaxes, and as it stands the technology appears to have real limitations. But this doesn’t exclude the technology from the possibility of improving communication in the real world. We just need to see Rearden focus on finding a real-world market for its technology.


  • [2017-01-10] Finally, the hint text has disappeared completely, to be replaced with:
    • “supports a different protocol to each device in the same spectrum concurrently” – following up on their last update
    • “support multiple current and future protocols at once.” – this is a great new insight. They state right up top that pCell supports 5G and future standards. So even without considering the increased capacity, customers don’t need to keep redeploying new hardware into the field.
    • “In the future the same pWave Minis will also support IoT” – there are standards floating around, and what better way to implement security for IoT than physically isolated wireless coherence zones, and perhaps very simplistic modulation.
    • “precise 3D positioning” – this confirms one of my predictions: pCell can supercharge the coming autopilot revolution.
    • “and wireless power protocols” – as I always suspected. However, it still seems impractical. This is likely just a candy-bar/hype statement.
    • “Or in any band from 600 MHz to 6 GHz” – it’s interesting to learn this specification – the limits of typical operation of pCell. I note they have completely abandoned long-wave spectrum (for now at least).
    • “pWave radios can be deployed wherever cables can be deployed” – I still think fibre/coax is going to be necessary, wireless backhaul is unlikely to be scalable enough.
    • “Typically permit-free” – does this refer to the wireless signal, I wonder? Very interesting if so. It could also refer to carrier licensing, because you’re only carrying data; information is only deduced back at the data centre.
    • “can be daisy-chained into cables that look just like cable TV cables” (from Whitepaper) – so perhaps long segments of coax are permitted to a base-station, but that base-station would likely require fibre out.
    • “pCell technology is far less expensive to deploy or operate than conventional LTE technology” – they are pivoting away from their higher-capacity message, now trying to compete directly against Ericsson, Huawei, and others.
  • [2016-02-25] pCell will unlock ALL spectrum for mobile wireless. No more spectrum reservations. pCell could open up the FULL wireless spectrum for everyone! I hope you can grasp the potential there. Yesterday I read a new section on their website: “pCell isn’t just LTE”. Each pCell can use a different frequency and wireless protocol. This means you could have an emergency communication channel and consumer internet both using 600 MHz at the same time, metres apart! In 10 years, I can see the wireless reservations being removed, and we’ll have up to TERABITS per second of bandwidth available per person. I’m glad they thought of it; this is going to be the most amazing technology revolution of this decade, and will make fibre to the home redundant.
  • [2015-10-03] It’s interesting that you can’t find Hint 1 on the Artemis site, even looking back through history (Google’s cache); in fact on 2015-02-19 it reads “Feb 19, 2014 – {Hint 2: a pCell…”, which is strange given my last update date below. Anyway, the newest hint may reveal the surprise:
    • “Massless” – Goes anywhere with ease
    • “Mobile” – outside your home
    • “Self-Powered” – either wireless power (unlikely), or the suggestion that a pCell is some sort of sci-fi vortex that persists without power from the user.
    • “Secure” – good for privacy conscious and/or business/government
    • “Supercomputing Instance” – I think this is the real clue, especially given Perlman’s history with a Cloud Gaming startup previously.
    • My best guesses at this stage in order of likelihood:
      • It’s pCell VR – already found in their documentation, and they just haven’t updated their homepage. VR leverages the positioning information from the pCell VRI (virtual radio instance) to help a VR platform both with orientation as well as rendering.
      • Car Assist – picks up on “Secure” and the positioning information specified for VR. VR is an application of pCell to a growing market; driverless is another growing market likely on their radar. Driverless cars have the most trouble navigating built-up, busy environments, particularly roundabouts. If pCell can help in any way, it’s by adding an extra absolute-position information source that cannot be jammed. Of course the car would also gain great internet connectivity, as well as central tracking of multiple vehicles for more centralised coordination.
      • Broader thin-client computing, beyond “just communications” (although one can argue pCell is a communications enabler). This would include business and gaming.
      • Emergency Response. Even without a subscription, it would be feasible to track non-subscribers’ locations.
  • [2015-02-19] Read this article for some quality analysis of the technology – [Archive Link] (the original link is broken)
  • [2015-02-19] Artemis have on their website – “Stay tuned. We’ve only scratched the surface of a new era.…{Hint: pCell technology isn’t limited to just communications}” – I’m betting that this will be the wireless power which Akbars suggested in his blog article. [Update 2015-10-03: this could be great for electric cars, although efficiency would still be quite low]
  • [2016-06-02] Technical video from CTO of Artemis –
    • Better coverage – a higher density of access points means fewer weak spots or black spots
    • When there are more antenna than active users, quality may be enhanced
    • Typical internet usage is conducive to minimising the number of antennas for an area
    • pCell is not Massive MIMO
    • pCell is Multi User Spatial Processing – perhaps MU-MIMO [see Caire’03, Viswanath’03, Yu’04]
    • According to mathematical modelling, densely packed MIMO antennas cause a large radius of coherent volume, while distributed antennas minimise it. Which is intuitive.
    • see 4:56 – for a 3D visualisation of 10 coherent volumes (spatial channels) with 16 antennas. Antennas are 50 m away from users – quite realistic. Targeting 5 dB SINR.
    • pCell Data Centre does most of the work – Fibre is pictured arriving at all pCell distribution sites.
    • 1mW power for pCell, compared to 100mW for WiFi. @ 25:20

Phishing Drill – Find your gullible users

Do you remember participating in fire drills in school? I remember them fondly: less school work for the day. I also remember earthquake drills when I went to school in Vancouver for a year. So what do drills do? They educate us about the signs and signals to look out for, and how to react. I believe spam filters work fairly well (that was a sudden change of subject). I use Gmail, where spam detection is built in, yet I still receive the occasional spam message. Educating those who fall for spam and phishing is an important factor in reducing the associated problems and scams. If all internet users had their wits about them, we could put spammers and phishers out of business – and most door-to-door salesmen. So how do we achieve this without million-dollar advertising campaigns?…. Drills. Spam/phishing drills, or to be more generic, Internet Gullibility Drills (IGD – everyone loves an initialism).

How do you drill the whole of the Internet? “Attention Internet, we will be running a drill at 13:00 UTC”…. probably not. My proposed method involves every web application which liaises with its customers by email, or is at risk of being spoofed in a phishing scam, running its own private drills. Such a drill would involve sending out an email message which resembles a real-life phishing/spam email. Each time, different variables could be used: email structure, sender address, recipient’s name, a direct link to a spoof site. In any case, the drill should be able to detect those who fall for it. They can then be notified more delicately than most would manage – “Haha, you just fell for our IGD, you loser!” is way off.
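A minimal sketch of such a drill mailer. The tracking domain and all names here are hypothetical; a real drill would send via your own mail infrastructure, with a spoof landing page recording each token:

```python
# Sketch of a private phishing-drill mailer. Each email carries a unique
# token, so a click on the link identifies exactly who fell for the drill.
import uuid

clicked_tokens = set()  # in a real drill, populated by the landing page


def build_drill_email(recipient_name, recipient_addr):
    """Compose one drill email with a unique tracking token."""
    token = uuid.uuid4().hex
    link = f"https://drill.example.com/verify?t={token}"  # hypothetical domain
    body = (
        f"Dear {recipient_name},\n\n"
        "Your account has been suspended. Please verify your details here:\n"
        f"{link}\n"
    )
    return {"to": recipient_addr, "subject": "Urgent: verify your account",
            "body": body, "token": token}


def record_click(token):
    """Called by the (hypothetical) landing page when a drill link is opened."""
    clicked_tokens.add(token)


def gullible_users(sent):
    """Who fell for the drill? Anyone whose unique token was clicked."""
    return [mail["to"] for mail in sent if mail["token"] in clicked_tokens]
```

Those returned by `gullible_users` would then be referred, delicately, to the gullibility-prevention material rather than mocked.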

Ultimately, a gullibility-prevention-centre website would exist which users could be referred to, so they may refresh themselves on current threats, how to identify them and how to react. Quite a simple solution, and maybe I’m not the first to think of it; I didn’t bother searching the Internet for a similar idea…


Creativity. Just Pulleys and Levers.

Growing up as a kid, I was captivated by magic tricks and wanted to know how they were done. Pulling a rabbit out of a hat, the sleight of hand, the magnets, the hidden cavity. They would have you believe that they were achieving something beyond the physical laws, that they had a supernatural power. TV shows and literature thrive on unveiling the surprisingly simple process behind even the most elaborate illusions.

Creativity is the last remaining magic trick.

Western culture goes to great lengths to idolize and mystify it. “It’s a gift”, “It’s a talent”, “They must use the right side of their brain”. Paintings and artworks are highly prized, some running into the millions of dollars. The creative process in the mind seems elusive and magical. Society seems to think that creativity is only for a select few. The fanfare and mystique of creativity add to the performance.

They’re wrong.

Creativity is a simple process of random noise and judgement, two very tangible, logical concepts. It’s a process. Like a magician’s rabbit in a hat. This doesn’t take away from the impact of the product of creativity, but it does dispel the superhuman status of the skill.

Small Things

Creativity doesn’t just happen once in an artwork; it happens multiple times at different levels, in small amounts, but always with the same components of random noise and judgement.

A painter may start with a blank canvas and no idea of what they will paint. They then recall memories, images and emotions which all feed as both random noise and experience for judgement. They then choose a scene, the first round of creativity has occurred.

The painter will not recall perfectly all the details of the scene, but will have to choose how the scene would be composed. In their mind they imagine the horizon, the trees, perhaps a rock, or a stream, each time picturing in their minds different locations and shapes and judging aesthetic suitability. Another round of creativity has occurred, with many more elements of creation. Once painting with a brush in their hand, a painter may think ahead of the texture of the rock, the direction of the stream, the type of tree, the angle and amount of branches, the amount of leaves, and the colours.


They may stand back, look at what they have painted and decide to change an element. In this case, their single painting is one possibility of randomisation, and they have judged it to be substandard. They then picture other random forms and corrections and judge the most appropriate course of action.

That whole process is the sum of smaller decisions, with good judgement and a flow of random ideas.

Small things everywhere

This is transferable to music composing. Instead of visualising, like the painter, they play different melodies in their mind. Many musicians fluke a new melody. They make a mistake on their instrument or purposefully allow themselves to play random notes. With judgement, they select appropriate phrases.

It also works for the lyrics of a song. Lyricists have a sea of words and language moving through their mind, and often randomise. How many words go through your head when you’re trying to find a word that rhymes? With good judgement and some planning, the final set of lyrics can inspire. But there are plenty of draft pieces of paper in the bin.

The end products of creativity can be very impressive, and an artist won’t usually discount their work as merely time and small things. There is one exception though: Vincent van Gogh famously said, “Great things are done by a series of small things brought together”.

Design vs Performance

At this point, it’s very important to comprehend two components of art: design and performance. Once a painting has been designed, it’s easy to reproduce, or perform. Now, the painter may have refined their design through performance, but they are left with a blueprint at the end for reproduction. Music is constructed in the same way, and is easily reproduced by many musicians. Lyrics can be recited, or sung to music by a performer.

So what part of art is actually creative? Often the performance is an almost robotic function. Jazz combines performance and design at the same time; it’s the design, the improvisation, that supplies the creative credential. Design is the crucial creative element. A painter executing the correct strokes on a canvas is simply a well-practised performance.

Random is inspiration

Randomisation can be, and most often is, external: anything we receive at a low level through our five senses, and at a higher level through those senses, such as emotion. An executive is often presented with several options, and uses judgement to select the most appropriate. They are not producing a painting or a song, but their process is still creativity – to society, a rather boring form of it. Software development is considered a very logical process, yet the end product is legally considered copyrighted literature. How could something so logical be attributed a magic-like status? This always conflicted in my mind; however, understanding creativity as noise and judgement in design and performance cycles helped rationalise creativity back to the mortal domain, and consequently allowed me to understand why software design is art.


I expect any artist who reads this article to be beside themselves: “software isn’t art!”. But it’s the same as uncovering the secret of a magician’s trick. Artists are rightly protecting their trade secret, which doesn’t bother me. I like the occasional magic show.


An expanded creativity formula:

R = Randomisation
J = Judgement
C = Creativity

C = J(R) – “Creativity is a function of Judgement of Randomisation”, as described above.

A breakdown of the formula’s components, and further insight into my perceptions of the lower-level concepts (more for myself, to map it out):

E = Experience
K = Knowledge
A = Article to be judged – perceptions through senses and feelings

J = F(A, E, K) – “Judgement is a function of an Article to be judged, Experience and Knowledge”

M = Memory
SFJ = Senses, Feelings and past Judgement

E = M(SFJ) – “Experience is a class of Memory: that of senses, feelings and past judgement”
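As a toy illustration of C = J(R): here randomisation is a stream of random letter strings, and judgement is a scoring function standing in for aesthetic experience. The target word and the scoring rule are, of course, invented for the example:

```python
# Toy model of C = J(R): creativity as judgement applied to random noise.
import random

random.seed(42)  # deterministic for the example


def randomise(n):
    """R: a stream of random five-letter 'ideas'."""
    letters = "abcdefghijklmnopqrstuvwxyz"
    return ["".join(random.choice(letters) for _ in range(5)) for _ in range(n)]


def judge(candidates, score):
    """J: select the candidate our 'experience' rates highest."""
    return max(candidates, key=score)


def score(word):
    """A crude stand-in aesthetic: closeness to the word 'sunny'."""
    return sum(a == b for a, b in zip(word, "sunny"))


creation = judge(randomise(10_000), score)  # C = J(R)
print(creation)
```

Each “round of creativity” in the painter’s process is just this loop run again at a different level of detail, with a richer judge.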


IPTV – How to conquer the livingroom

It’s embarrassing watching the video entertainment products coming out at the moment. They’re all trying to find the winning combination, and no one is succeeding; even Apple failed with their Apple TV product. The problem is that they’re trying to invent an expensive lounge-room Swiss Army knife, when what customers need is simplicity. They are failing to see the primary barrier: no one has IP-enabled TVs.

Here’s my formula to conquer the living room:

  1. All new TVs should be IPTV-enabled with a gigabit Ethernet port – this may include an on-screen display to surf the web etc., but basically it should support “Push IPTV”
  2. IPTV Adaptor – develop a low-cost IPTV-to-TV device which simply supports “Push IPTV”, e.g. converts packets into an HDMI signal.
    • I want a company to develop an ASIC
    • It accepts and converts streamed video content (of the popular formats)
    • The chip outputs HDMI, Component, S-Video or Composite
    • The chip is implemented in 4 different products: IP-HDMI, IP-Component, IP-S-Video, IP-Composite

With that barrier cleared, you don’t need to fork out for another gadget for your living room; you simply leverage your PC or laptop, pushing streaming video to any display in your home. When you connect your IPTV adaptor to the network, it announces itself, and all media devices and media software can then push streaming video to that display.
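A sketch of that announcement step, assuming a simple UDP-multicast scheme. The group address, port and message format are all made up for illustration; a production device would more likely use SSDP or mDNS:

```python
# Sketch of an IPTV adaptor announcing itself on the local network.
import json
import socket

GROUP, PORT = "239.255.0.77", 5077  # hypothetical multicast group and port


def build_announcement(device_name):
    """Describe this display so media software knows it can push to it."""
    return json.dumps({
        "service": "push-iptv",
        "name": device_name,
        "output": "HDMI",
        "formats": ["h264", "mpeg2"],  # the 'popular formats' the ASIC decodes
    }).encode()


def announce(message):
    """Multicast the announcement; media software listening on the group
    address discovers the new display."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
    sock.sendto(message, (GROUP, PORT))
    sock.close()
```

The point of the design is that the adaptor is dumb: it only announces and decodes, and all the intelligence stays in the PC or laptop doing the pushing.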

So now you can use your laptop or iPad as a remote. You drag your show onto your lounge room and away you go! While everyone is watching the TV, you can see thumbnail previews of other IPTV shows currently showing, so your channel surfing doesn’t annoy everyone else 🙂

The Web Security Emergency

We responsible users of the internet have always been wary when surfing the Web. We know that we need to make sure websites use TLS security; we need to see HTTPS and a tick next to the certificate to ensure no one is eavesdropping on the information being transmitted.

How wrong we are.

The security industry has long known about the weakness of RSA and ECC (the major cryptography used on the internet), as well as other asymmetric cryptography algorithms, against a quantum computer. And they have done little to prepare for the advent of the first quantum computer, because it has always been a futuristic dream. But this position is quickly becoming antiquated: developments in the last few years now have scientists projecting the first quantum computer to arrive within 5 years. 5 years isn’t far away when you consider that your sensitive data could be recorded by anyone today, or even in the past, in the hope of decrypting it in 5 years!

There are people who think that Quantum computers will never come, but they are just burying their heads in the sand. Researchers have already developed one which implements Shor’s algorithm – the one which breaks RSA and ECC – on a chip!

So what is the security industry doing about it now? The threat doesn’t arrive in 5 years; the internet is insecure today. People are carrying out bank transactions today, believing that the data being transmitted will never be read by an unauthorised third party. Programs and drivers are signed with algorithms which will be broken in 5 years; what will stop malware then? There are also anonymity systems such as Tor and I2P which likely use RSA as the basis for their security; in 5 years, how many citizens of politically oppressed countries will get the death penalty?

Fortunately, there are asymmetric cryptography algorithms which are not known to be breakable by quantum computers, but these have not been standardised or fully researched yet. So what it comes down to is that the security industry doesn’t have the answer, and that’s the reason they are not telling anyone of the problem; they’re effectively covering up the truth.


I’ve seen a lot of rapid developments recently, and I’m still optimistic about an RSA-breaking quantum computer within 5 years (from June 3, 2010).


The commercially available D-Wave (quantum annealing) can factorise numbers, according to some of their marketing and this Stack Exchange question. The question also describes the currently perceived limits of D-Wave, or quantum annealing in general, estimating that N^2 qubits are required to factor an N-bit number. The current D-Wave has only 512 qubits.

If the number of qubits were to double annually, then 1024-bit SSL encryption could potentially be easily cracked by such a device in 11 years.
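The arithmetic behind that 11-year figure, as a quick check (it assumes the N^2 qubit estimate from the Stack Exchange discussion holds, and annual doubling from 512 qubits):

```python
# Years of annual doubling, from 512 qubits, until the N^2 estimate
# for factoring an N = 1024 bit key is met.
needed = 1024 ** 2  # 1,048,576 qubits under the N^2 rule of thumb
qubits, years = 512, 0
while qubits < needed:
    qubits *= 2
    years += 1
print(years)  # 11
```

Of course, both assumptions are generous: qubit counts have not historically doubled every year, and the N^2 estimate is itself contested.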

However, this is only what is commercially available. Given enough money, it is conceivable that a government or military could possess a larger one now. Maybe even the NSA.


D-Wave cannot break today’s SSL web encryption:

The optimizer they now claim to have is restricted to problems that can be mapped to an Ising model—in other words, the computer is not universal. (This precludes Shor’s algorithm, which factors integers on a quantum computer.)


I’ve got less than a year left on my 5-year prediction, but I have finally found a scientist making a prediction of their own. It would not be unreasonable to think the US DoD could have this already, or within a year, but it is most practical to simply say I was out by perhaps 5 years. So, effectively, the warning starts today!

They hold out the possibility of a quantum computer being built in the next five to 15 years.


UPDATE [2015-09-30]:

Even the NSA is worried about the post-quantum world, see:

UPDATE [2015-10-14]:

Maybe my prediction was right (only out by 4 months):

Apparently it is feasible to build a quantum computer today: one that can defeat all encryption used in internet communication (provided that data is wiretapped and stored). Although it may take 5 years for mass-scale commercialisation, I’m sure the NSA, FBI and DoD of the USA would be capable of building a quantum computer now, if they don’t already have one.

The breakthrough by UNSW could very well have been discovered earlier in secret. So this has implications for international espionage today, broader law enforcement in a few years, and the whole underpinning of internet security in 5 years.

Using WiFi and searching Google via HTTPS? In 5 years, the owner of the access point could very likely decrypt your searches and other information, including bank passwords.

The only secure encryption today requires a pre-shared key: a password entered at each end of the communication channel.

Further reading: toshiba-invention-brings-quantum-computing-closer.aspx

Super city: Pushing the technology boundaries

In the last article I discussed the concept of Technology Development Zones. The concept can be taken all the way to what we could call a super city. I arrived at this idea after wondering what I could do with $1bn. After finishing with dreams of a house on the moon or a medieval castle in the mountains, I started jotting down some points.

Why can’t we start building an entirely new, entirely futuristic city? When you start from scratch, you can benefit from having no boundaries.

Australia happens to be the perfect place for such an idea: a good economy, and a housing shortage.

The Detail

I’ll try to keep it short:

  • The city is a skyscraper providing spectacular views for all residents: 500 m high, 500 m wide, 40 m deep, accommodating a little under 50,000 people.
    • This reduces the human footprint, with all services contained within a single building. The only reason for people to leave the building is for recreation and farming.
  • It’s located at least 300 km from Melbourne, reducing city sprawl
  • But it’ll only take you 30 minutes to travel 300 km in any direction – see Transport below
  • Implements a “Base Luxury Standard”. A body corporate scheme, to operate on economies of scale.
    • Logistics – Cater for all logistics problems in one solution – Let’s call it a Transporter
      • A 3D “elevator” system
      • Elevator capsules which can carry up to 10 people and a few tonnes
      • Can travel up/down, left/right, and back/forth
      • e.g. move from the front-middle of the first floor to the back-left of the top floor without “changing elevators”
      • Transporter capsules travel laterally along what would normally be the hallway for walking to your apartment
        • When travelling laterally to an apartment, the transporter doors and apartment doors open together
        • In an emergency, the apartment doors can be manually opened and occupants can walk down the lateral transporter shaft
          • Manual overrides are detected by the system; transporters for the entire floor are speed-reduced, and obstacle detection is activated to avoid collisions with people.
      • Keep in mind that in an emergency, transporters should still be operational laterally, as there is no danger of dropping.
      • Transporters are not just used to transport people but also:
        • Food – washable containers transport prepared food, cutlery etc. from kitchens; used containers are returned to be washed.
        • Heating/Cooling – heat bricks or molten salts, and LN2 packs, for refrigeration, air conditioning and heating
          • No pipes = less cost, no maintenance
        • Water – a set of dedicated water transporters is used to fill a small reservoir in each apartment
          • No pipes = less cost, no maintenance
          • Bathroom and commercial facilities do have pipes
        • General Deliveries – Furniture, clothing, presents, mail, dirty/washed clothes etc…
        • [Not Data] – that’s fixed line or wireless radio; you can’t just transport hard disks, the latency is much too slow 🙂
    • Food (Diet) – a set base cost for food every week, which is pooled, from which food providers are paid. To start off, fully automated systems are desirable to peel, slice, etc.; it’s possible to have a fully automated catering system which handles 80% of meals. The final 20% is catered for by chefs, who still use machines for preprocessing, at an additional cost. E.g. $5/person per day for any basic meal, with extra for specialist meals.
    • Climate – instead of thousands of small air-conditioner compressors and inverters in every apartment, have 3 very large and very efficient heat pumps, and efficiently transport the heat/cold. Each apartment then has its own fan and climate-control system using the liquid nitrogen and heat bricks; a simple refrigerator and freezer also run off the liquid nitrogen, removing two more compressors.
    • Data – fibre runs to each apartment and is patched to equipment inside. A fibre runs to the TV, and Ethernet over Power is provisioned and isolated for the apartment so that every appliance and electrical device is controllable. Wireless systems are a feasible alternative.
    • Hygiene – several banks of showers and toilets on each floor; the transporter takes you to the next available toilet or shower as required. Instead of a toilet and shower taking up space in each apartment, used perhaps a hundredth of the day, a central bank of them is more efficient. The showers and toilets are self-cleaning, with minor cleaning cycles after every use and major cycles as required (e.g. every half day).
    • Transport – within the building the transporter can take you anywhere, but what makes a remote city work well is fast transport to established city centres. Monorail is quite expensive, and still relatively slow and inefficient compared to air travel over long distances (about 800 km). There is plenty of scope for new transport ideas:
      • Air-evacuated tunnel rail (supersonic speeds without the risk and fuel of staying aloft)
      • Personal aircraft (looking more like aeroplanes, possibly launched by a ground-based launcher; not those ridiculous artist impressions of cars with 4 loud, fuel-guzzling turbine engines)
      • Automated electric vehicle transport
      • Community car pool (basically like small automated buses which only travel along a particular route or highway)
    • Menial tasks – clothes and dish washing is fully centralised and automated. Less tedious work for residents means more time to live: a higher quality of life.
    • Shelter – no one truly owns their space; they either hold (pay around $50,000 for their entire life) or rent (interest on $50,000 over a lifetime)
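Some back-of-envelope numbers for the headline claims above. The 10 m/s capsule speed is my assumption (roughly an express lift today); the dimensions and population are from the points above:

```python
# Sanity check: space per resident, and worst-case transporter trip time.
height_m, width_m, depth_m = 500, 500, 40
residents = 50_000

volume_per_person = (height_m * width_m * depth_m) / residents
# 200 m^3 each: e.g. a 9 m x 9 m footprint at 2.5 m ceilings, before
# shared facilities are subtracted.

capsule_speed_mps = 10  # assumed; comparable to a fast express lift
worst_trip_m = height_m + width_m + depth_m  # axis-by-axis traversal, worst case
worst_trip_s = worst_trip_m / capsule_speed_mps
print(volume_per_person, worst_trip_s)  # 200.0 104.0
```

So even a corner-to-corner trip is under two minutes, ignoring queuing, which is what makes the shared showers, kitchens and laundry plausible.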


With a super city, developed countries have an opportunity to push past the so-called “modern” boundaries of today and exceed people’s expectations with a completely reinvented society and lifestyle. Super cities are not just technology test beds; they also offer citizens cheaper living for a greater quality of life and less stress: freedom from menial tasks, very short waits for transport, and short travelling times.

But even developing countries stand to benefit. The cost-effectiveness of super cities and their efficient systems could help pull poor countries out of poverty. And various novelties could be redeployed into existing cities.

Technology Development Zones: Economic Development Zones for developed nations

How long can we keep calling a combustion engine modern? Or a toaster, microwave, stove, or even the lounge room? We can’t break a lot of traditions or social norms, but there are definitely people out there willing to give it a go. I once saw a documentary about China’s Economic Development Zones (EDZs): small geographical areas isolated from the macro economy and its regulation, used to attract investment. China most famously uses such zones to help its economy grow, allowing western investors to leverage cheap Chinese labour with western business practices. These EDZs are economic hot spots which eventually flow through to the greater Chinese economy. The general idea is that developing countries need EDZs to industrialise. I propose that such zones should never disappear, even in an advanced industrialised nation. An EDZ in a developed economy should have a technology focus rather than an economic one, making it a Technology Development Zone (TDZ), harnessed to further technology, processes, social refinement and regulation. Just as in developing countries, the main barriers are culture and law.

I consider TDZs important for future-seeking, “modernised” societies. There is often cultural resistance to change; a TDZ would attract people and families who are excited to consume new technologies and are open to it. A TDZ will help innovators commercialise, selling to a tight, first-mover market. People live in a TDZ voluntarily. Residents are co-operative, possibly innovators themselves, and should be able to find employment within the TDZ across a wide range of industries. They are expected to try out new things, answer weekly questionnaires, contribute feedback and embrace change. People outside a TDZ are more likely to accept change once they have seen it in practice, and investors are more likely to invest in an idea that can be implemented in a co-operative market. It’s quite possible for the progressive social norms of a TDZ to spread outside it and transform a nation to be more conducive to change.

Many amazing technologies could be developed if everyone had access to all IP. Patents aren’t evil; they are necessary to protect inventors so they may extract value from their inventions, blocking out competitors which didn’t have the same foresight. Unfortunately, there are cases where patent holders sit on a patent and don’t commercialise it, with potential consumers the losers. There are even cases where companies buy out technology just to stop losing their traditional markets. A TDZ could offer a small community immunity from IP laws, offering tremendous innovation opportunities. IP holders would have priority to commercialise their IP within a TDZ, but if another company wants to build a product (say a fridge) which uses another company’s IP (e.g. text-to-speech), and the IP owner is not building the same product within the TDZ, then there should be no block. As a result, all products to be built for the TDZ should be approved by a product register, to avoid product overlap and to negotiate IP priority. I don’t consider such IP-law exemptions mandatory for the success of a TDZ, but they would have significant benefits.

I have seen evidence that highly competitive markets can deter innovation. The latest craze, e.g. the iPhone, although innovative, is already successful in the regular marketplace and can dishearten new local innovation. The competitors in the smartphone market are super players such as Apple, Google, RIM and Microsoft. Thankfully, Google created an open platform which is starting to reduce the monopolistic iPhone dominance. TDZ managers could help isolate fads from inside a TDZ, freeing up consumption capacity for new innovation. Technologies and products within a TDZ should be limited, where possible, to those not found outside it. Residents within a TDZ would never have the luxury of settling on a device such as an iPhone; new devices would supersede old ones. For example, the iPhone would have been expected, then the Google Nexus, then a Windows Phone 7 device, and so on. In trials, residents should receive significant discounts on such devices; after all, they would be expected to answer questionnaires quite frequently and sustain a relatively high consumption of technology.

The electric car is a great example for illustrating the need for a TDZ. In a previous article I discussed the resistance to change from the oil and combustion automotive industries. If a TDZ were set up in a small city, a micro-economy could be tooled to demonstrate a society living with electric cars. From that micro-economy the idea could spread to the rest of the country and then to the rest of the world. The changes would be gradual, and the incumbent industries would be able to foresee the success in the TDZ and adapt for its eventual success in the greater community. Within the TDZ, regulations would be different: the government could declare all EV patents unenforceable, and road laws would be relaxed, requiring only engineering approval for reasonable vehicles. Consider the benefits: innovators would discover the best frontiers for the technology, such as logistics and cost-effective transport for the housebound elderly. The technology could then move into mainstream transportation use, where the single occupant of a car can be productive while travelling.

Imagine the super-futuristic TDZ. There could be social change almost impossible to introduce today due to safety hysteria. You could redesign infrastructure and experiment with new city layouts. Citizens would expect to be able to watch a movie or do some work while they’re travelling; groceries would be automatically ordered and delivered; no one would do dishes, cook their own meals, or iron and wash clothes; Internet speeds would be tens of gigabits per second. Such revolutionary change can only happen in a captive, conducive society where change is embraced.

The most effective TDZ would be a purpose-built city. It could be close to a capital city, so initial citizens could find work outside while the local economy and infrastructure develop. Such a move would require significant conviction from a politician, and cannot be expected of the first TDZ in a nation. A TDZ in itself could be too progressive a call for a politician of today. IP relaxation could have serious political ramifications, but a successful TDZ may significantly outweigh those risks. In any case, a TDZ is the kind of invention that can be scaled up in stages. I live in Geelong. Geelong could be declared a TDZ precinct, which could start a demographic shift, with technology “thrill seekers” moving to the region. At the same time, a new suburb could be planned and developed as a micro-TDZ. Depending on the success of the TDZ precinct, a purpose-built TDZ may become politically feasible.

The TDZ may very well play a significant part in our future. By leaving behind most traditions and inhibitions, we can begin to understand how society can better adapt to technology. Aside from the ideals of a more modern world, the economic benefits may overshadow even the most optimistic expectations. What are the benefits of technology that is not merely available, but fully embraced by society?

In 1899, the U.S. Commissioner of Patents was famously, though almost certainly apocryphally, quoted as saying, “Everything that can be invented has been invented.” We must not let ourselves become accustomed to the status quo; we have a lot to learn.


It looks like my idea has been picked up in some form; too bad the team captain is going to lose the game (and botch this, just like everything else).