Food Forever?

What if we could save our spoiling food before it was too far gone? I often have half a litre of milk spoil at the office, and I have to tip it down the sink.

I’m no biochemist, so I’m hoping this idea finds a nice home with a real scientist who either debunks it or points the way forward.

Could we have a home appliance which could UHT-treat leftover milk so we can use it later or donate it?

Are there other foods which could be preserved in such a way? I’m guessing most would need an ultra-heat process. Like an autoclave, it would kill all the bacteria with no regard for taste. If it’s meat, it might be tough, but it would at least be better pet food than what’s in a can.

Problem?

5 Secret Strategies for GovHack

Monday night I attended the VIC GovHack Connections Event. No, there wasn’t any pizza… but there was a selection of cheeses, artichokes and more.

Here are my top 5 tips:

1) Do something very different

This competition has been running for a number of years, and the judges are seeing similar patterns emerging. Browse through previous years’ hackerspace pages and look at the types of projects they’ve had before. Look at the winners.

2) Evaluate the data

This might be the main aim of your project, but we want quality data for future years, and enough evidence to remove the unnecessary, find the missing, and refresh the old.

3) Prove real-time and live data

Melbourne City have their own feeds of real-time data this year. If you want to see more of that, consider using this data.

4) Simulate data

This strengthens your assessment of missing data [2], could involve simulated live data feeds [3] as above, and would be very different [1].

5) Gather data

This is actually a bit harder than simulating data [4], but very useful. You could use computer vision, web scraping, or make an open app (like OpenSignal) that many people install to collect data.

Off the record

I’ve got a few ideas for GovHack projects in mind on the day. I’m not competing, so come and talk to me on Friday night or Saturday for ideas along these lines.

Try Scope Catch Callback [TSCC] for ES6

So it has started; it wasn’t a hollow thought bubble. I have started the adventure beyond the C# nest [http://blog.alivate.com.au/leave-c-sharp/]. It will take a while, because I still have a lot of software that runs on C#, and I do still like the language, but all new development will be on ES6 and NodeJS.

So I’m going to record my outlook over a few blog posts. I re-discovered Cloud9 IDE, and I’ve got a few thoughts on architecture and a new feature for ES6.

Today, I’ll tell the world about my proposed ES6 enhancement.

Despite the ECMAScript committee stating they are about “Standards at Internet Speed”, there isn’t much Internet tooling in place to make that happen. They have certainly been successful in making rapid progress, but where does one submit an idea to the committee? There’s not even an email link. I’m certainly not going to cough up around $100k AUD to become a full member. [Update: They use GitHub; a link to it from their main website would be great. Also check out: https://twitter.com/ECMAScript]

So I’ll be satisfied to just put my first ES6 idea here.

Try blocks don’t work in a callback world. I’m sure there are libraries which could make this nicer; in C#, for instance, try blocks do work with the async language features.

So here is some code which won’t catch an error:

try
{
    $http.get(url).then((r) => {
        handleResponse(r);
    });
}
catch (e)
{
    // Never reached: a failed request rejects the promise asynchronously,
    // after the try block has already exited.
    console.log(e);
}

In this example, if there is an error during the HTTP request, it will go uncaught.
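For comparison, a handler attached to the promise itself does catch the failure – which is exactly the point: the handler must be wired to the promise, not to the enclosing try block. A minimal sketch, reusing the same assumed $http API:

$http.get(url).then((r) => {
    handleResponse(r);
}).catch((e) => {
    // Reached on request failure; the surrounding try/catch never is.
    console.log(e);
});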

That was simple, though. How about a more complex situation?

function commonError(e) {
    console.log(e);
}

try
{
    runSQL(qry1, (result) => {
        doSomethingWith(result);
        runSQL(qry2, (result) => {
            doSomethingWith(result);
        }, commonError);
    }, commonError);
}
catch (e)
{
    // Only catches synchronous errors; commonError must be handed to
    // each runSQL call explicitly for the asynchronous failures.
    commonError(e);
}

Callback nesting isn’t very nice. This is why `await` is pushed forward as a good candidate. But what if the API you target doesn’t implement Promise? What if you only sometimes define a try block?

My proposal is to supply a method which gets the Try Scope Catch Callback [TSCC]. If you don’t return a promise, it would be like this:

function get(url, then, error) {
  // Fall back to the try scope’s catch callback when no
  // error handler was supplied. (TSCC)
  error = error || window.callback.getTryScopeCatchCallback();

  // When an error occurs (e being whatever error was raised):
  error(e);

  // This could be reacting to another try/catch block, or be the
  // result of a callback from another error method.
}

Promises already have a catch function in ES6. They’re so close! A Promise should direct its error/catch callback to the TSCC by default. If the Promise spec were updated to include this, my first example of code above would have caught the error with no changes to the code.
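To illustrate the intent, here is a rough userland approximation – the tryScope helper is my own invention, not part of any spec, and it reuses runSQL and commonError from the example above:

// A minimal sketch of TSCC without engine support: tryScope runs `body`,
// handing it the scope’s catch callback so callback-based APIs can route
// asynchronous errors to the same handler as synchronous ones.
function tryScope(body, onError) {
  try {
    body(onError);  // the catch callback is made available to the scope
  } catch (e) {
    onError(e);     // synchronous errors land in the same handler
  }
}

tryScope((error) => {
  runSQL(qry1, (result) => {
    doSomethingWith(result);
    runSQL(qry2, (r2) => doSomethingWith(r2), error);
  }, error);
}, commonError);

Engine support would remove the need to thread `error` through by hand – that is the gap TSCC is meant to fill.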

So what do you think ECMA members, can we get this into ECMAScript?

Feedback log – from the [email protected] mailing list

  • kdex

Why not just transform callback-based APIs into `Promise`s and use (presumably ES2017)
`await`/`async` (which *does* support `try`/`catch`)?

e.g.:

try {
    await curl("example.com");
    /* success */
}
catch (e) {
    /* error */
}

  • My response

1. Whether you await or not, the try scope’s catch callback [TSCC] should still be captured.

2. If there is no use of Promise (for the coder’s own design reasons), the try scope’s catch callback [TSCC] should still be available

GovHack – Do we need real-time feeds?

It’s the year 2016, and we still don’t know how many minutes away the next bus is in Geelong.

Public releases of data take time and effort, and unless they are routinely refreshed, they get stale. But there’s certain types of information that can’t be more than minutes old to be useful.

Traffic information is the most time-sensitive. The current state of traffic lights, whether any signals are currently out of order, and congestion information are already collected in real time in Australia. We could clearly benefit from such information being released as it happens.

But imagine this up-to-the-minute benchmark applied to all datasets. First of all, you wouldn’t have any ageing data. More importantly, it would force data publication to be automated, and therefore scalable, so that instead of preparing another release of data, public servants could focus on the next type of data to make available.

What do you think?

Participate in GovHack this year, play with the data we do have and continue the conversation with us.

(I will be publishing a series of blogs focusing on GovHack, exploring opportunities and challenges that arise and that I consider while working on the committee for the Geelong GovHack, which runs 29-31 July 2016)

Image courtesy Alberto Otero García licensed under Creative Commons

GovHack – What tools will you use this year?

The world is always changing, and in the world of technology it seems to change faster.

You certainly want to win some of the fantastic prizes on offer, but remember, we want world changing ideas to drive real change for real people, and we can do that best together.

So share with us and your fierce competitors, which new tools and techniques you plan to use this year.

Some popular new ones that I’m aware of include Kafka and MapMe.

Both of these feed into my own personal desire to capture more data and help Governments release data in real time. Check them out, and please comment below about any tools and techniques you plan to use this year.

(I will be publishing a series of blogs focusing on GovHack, exploring opportunities and challenges that arise and that I consider while working on the committee for the Geelong GovHack, which runs 29-31 July 2016)

Image courtesy RightBrainPhotography licensed under Creative Commons

What data do you want to see at GovHack?

Let’s forget about any privacy and national security barriers for the moment. If you could have any data from Government, what would you request?

GovHack is a great initiative which puts the spotlight on Government data. All of the departments and systems collect heaps of data every day, and lucky for us they’re starting to release some of it publicly.

You can already get topographic maps, drainage points, bin locations, BBQ locations, council budget data and much more. But that’s certainly not all the data they have.

Comment below on what data you think would be useful. It might already be released, but it would be interesting to go to Government with a nice long shopping list of data to be made ready for us to delve into next year.

(I will be publishing a series of blogs focusing on GovHack, exploring opportunities and challenges that arise and that I consider while working on the committee for the Geelong GovHack, which runs 29-31 July 2016)

Image courtesy Catherine, licensed under Creative Commons

GovHack – How can we collect more data?

If we had all the cancer information from around the world, any keyboard warrior could wrangle the data and find helpful new discoveries. But we struggle to even complete a state-level database let alone a national or global one.

After being dazzled by the enormous amount of data already released by Government, you soon realise how much more you really need.

For starters, there are lots of paper records that aren’t even digital. This isn’t just a Government problem, of course; many private organisations also grapple with managing unstructured written information on paper. But if Governments are still printing and storing paper in hard copy form, we further delay a fully open digital utopia. At the very least, storing the atomic data separately from the merged and printed version enables future access, and stops it from being mindlessly discarded into the digital black hole.

Then consider all the new types of data which could be collected: the routes that garbage trucks and buses take, and the economics of their operation. If we had such data streams, we could tell citizens if a bus is running ahead or behind. We could have GovHack participants calculate more efficient routes. Could buses collect rubbish? We need data to know. More data means more opportunities for solutions and improvement for all.

When you consider the colossal task ahead of Government, we must insist on changing the culture so that data releases are considered a routine part of public service, and make further data collection an objective, not a bonus extra. Until that happens, large banks of knowledge will remain locked up in fortresses of paper.

What do you think? Do you know of any forgotten archives of paper that would be useful for improving lives?

Participate in GovHack this year, play with the data we do have and continue the conversation with us.

(I will be publishing a series of blogs focusing on GovHack, exploring opportunities and challenges that arise and that I consider while working on the committee for the Geelong GovHack, which runs 29-31 July 2016)

Image courtesy Fryderyk Supinski licensed under Creative Commons

Why I want to leave C#

Startup performance is atrocious and, critically, that slows down development. It’s slow to load the first page of a web application, slow to navigate to whole new sections, and worst of all: initial Entity Framework LINQ queries.

It’s 2016, and .Net is very mature, but this problem persists. I love the C# language far more than Java, but when it comes to the crunch, run-time performance is critical. Yes, I was speaking of startup performance, but you encounter it again as new areas of the software warm up, and also when the AppPool is recycled (scheduled every 29 hours by default). Customers see that most, but it’s developers who must test and retest.

It wastes customers’ and developers’ time. Time means money, but the hidden loss is focus. You finally get focused on a task, but then have to wait 30 seconds for an ASP.NET web page to load so you can test something. Even stopping your debugging session in VS can take tens of seconds!

There are known ways to minimise such warm-up problems, such as native image generation and EF query caching. Neither is a complete solution. And why work around a problem that isn’t experienced in node.js or even PHP?

.Net and C# are primarily for business applications. So how important is optimising a loop over millions of records (for big data and science) compared to the user and developer experience of running and starting with no delay?

Although I have been critical of Javascript as a language, its recent optimisations are admirable. It has been optimised with priority given to first-use speed, and critical sections are optimised as needed.

So unless Microsoft fix this problem once and for all, without requiring developers to coerce workarounds, they’re going to find long-term dedicated coders such as myself shifting to Javascript, especially now that ECMAScript and TypeScript make Javascript infinitely more palatable.

I have already recently jettisoned EF in favour of a proprietary solution which I plan to open source. I also have plans for node.js and even my own IDE which I plan to lease. I’m even thinking of leaving the Managed world altogether – Heresy!

.Net has lots going for it, it’s mature and stable, but that’s not enough anymore. Can it be saved? I’m not sure.

Busted! Internet Community Caught Unprepared

Internet Security (TLS) is no longer safe. That green HTTPS word, the golden padlock: all lies. The beneficiaries: trusted third parties who charge for certificates. Yes, it sounds like a scam, but not one actively peddled; this one comes from complacency by the people who oversee the standards of the internet. Is there bribery involved? Who knows.

A month ago, there were no problems with TLS, because it was only on the 6th of October that a paper was published which paves the way to building machines that can break TLS. These machines are called Quantum Computers. So where’s the scam?

The nerds behind the Internet knew long ago about the threat of such a machine being developed. They also knew that new standards and processes could be built that are unbreakable even by a Quantum Computer. But what did they do? They sat on their hands.

I predicted in 2010 that it would take 5 years before a Quantum Computer would be feasible. I wasn’t specific about a mass production date. I was only 4 months out. Now it’s feasible for all your internet traffic to be spied on, including passwords, if the spy has enough money and expertise. But that’s not the worst part.

Your internet communication from last year may also be deciphered. In fact, all of your past internet traffic that you thought was safe could be revealed, if an adversary was able to store it.

I wrote to Verisign in 2010 and asked them what they were doing about the looming Internet Emergency, and they brushed my concern aside. True, users have been secure to date, but they knew it was only a Security Rush. Like living in the moment and getting drunk, unconcerned about tomorrow’s hangover, users have been given snake oil: a solution that evaporates only years later.

All of these years, money could have been poured into accelerated research. There are solutions today, but they’re not tested well enough. The least that could be done is a doubling of security: run both the tried and tested RSA and a new, theoretically unbreakable encryption in tandem.

Why is there still no reaction to the current security crisis? There are solid solutions that could be enacted today.


The Fraying of Communication and a proposed solution: Bind

In medicine, the misinterpretation of a doctor’s notes could be deadly. I propose that the ambiguity of even broader discourse has a serious and undiscovered impact. This problem needs to be researched and will be expounded on further, but first I would like to explore a solution, which I hope will further open your understanding of the problem.

As with all effective communication, I’m going to name this problem: Fraying. For a mnemonic, consider the ends of a frayed string being one of the many misinterpretations.

His lie was exposed, covered in mud, he had to get away from his unresponsive betraying friend: the quick brown fox jumped over the lazy dog.

That’s my quick attempt at an example where context can be lost. What did the writer mean? What might a reader or machine algorithm misinterpret it to mean? Even with the preceding context, the final sentence can still be interpreted many ways. It’s frayed in a moderate way, with minor impact.

In this example, it would be possible for the author to simply expound further on that final sentence, but that could ruin the rhythm for the reader (of that story). Another method is to add such text in parentheses. Either way, it’s a lot of additional effort by multiple parties. And particularly in business, we strive to distil our messages to be short, sharp and to the point.

My answer of course is a software solution, but one where plain text is still handled and human readable. It’s a simple extensible scheme, and again I name it: Bind (going with a string theme).

The quick [fast speed] brown fox [animal] jumped [causing lift] over [above] the lazy dog [animal]

With this form, any software can present the data. Software that understands the scheme can remove the square brackets when there is no facility for an optimised viewing experience. For example:

The quick brown fox jumped over the lazy dog


Since the invention of the computer and keyboard, such feats have been possible, but not simply, and certainly not mainstream.

So it would be important to proliferate a Binding text editor which is capable of capturing the intent of the writer.

The benefits of Binding go beyond solving Fray. Bindings add more context for disability accessibility (I would argue Bind is classed as an accessibility feature – for normative people) and, depending on how many words are Bound, even assist with language translation.

Imagine Google Translate with a Binding text editor, the translations would be much more accurate. Imagine Google search, where you type “Leave” and hover over the word and select [Paid or unpaid time off work], leaving you less encumbered with irrelevant results.

Such input for search and translation need not wait for people to manually bind historical writing. Natural Language Processing can bear most of the burden and when reviewing results, a human can review the meaning the computer imputed, and edit as needed.

We just need to be able to properly capture our thoughts, and I’m sure we’ll get the hang of it.

Hey, by the way, please add your own narrative ideas for “the quick brown fox jumped over the lazy dog”, what other stories can that sentence tell?

Appendix – Further Draft Specification of Bind:

Trailer MetaData Option:

  • Benefit: the metadata is decoupled visually from the plain text. This makes viewing on systems without support for the Bind metadata still tolerable for users.
  • Format: [PlainText][8x Tabs][JSON Data]
  • JSON Schema: { BindVersion: 1, Bindings: […], (IdentifierType: “Snomed”) }
  • Binding Schema: { WordNumber: X, Name: “Z”, Identifier: “Y”, Length: 1}
  • Word Number: Word index, when words are delimited by whitespace and punctuation is trimmed.
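For illustration, a hypothetical trailer line for the fox sentence could be built like this (the binding values are invented, and a zero-based word index is assumed):

// Sketch of the Trailer MetaData format: plain text, 8 tabs, then JSON.
const plainText = 'The quick brown fox jumped over the lazy dog';
const metadata = {
  BindVersion: 1,
  IdentifierType: 'Snomed',
  Bindings: [
    // “fox” is word index 3 (zero-based); identifier 415 is illustrative.
    { WordNumber: 3, Name: 'Animal', Identifier: '415', Length: 1 }
  ]
};
const line = plainText + '\t'.repeat(8) + JSON.stringify(metadata);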

Mixed MetaData Option:

  • When multiple preceding words are covered by the Binding, a number of dashes indicates how many more words are covered. Bind Text: “John Smith [-Name]” indicates the two words “John Smith” are a Name.
  • The identifiers specified in ontological databases such as Snomed, may be represented with a final dash and then the identifier. Bind Text: “John Smith [-Name-415]” indicates a word definition identifier of 415, which may have a description of “A person’s name”.
  • When a square bracket is intended by the author, output a double square bracket. Bind Text: “John Smith [-Name] [[#123456]]” renders plainly to “John Smith [#123456]”
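A minimal sketch of a plain-text renderer for the mixed form, assuming just the rules above:

// Render Mixed MetaData Bind text to plain text:
// [...] annotations are dropped; [[...]] escapes literal brackets.
function renderPlain(bindText) {
  return bindText
    .replace(/\[\[([^\]]*)\]\]/g, '\u0000$1\u0001') // protect escaped spans
    .replace(/\s*\[[^\]]*\]/g, '')                  // drop Bind annotations
    .replace(/\u0000/g, '[')                        // restore literal brackets
    .replace(/\u0001/g, ']');
}

console.log(renderPlain('John Smith [-Name] [[#123456]]'));
// -> John Smith [#123456]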

Digital Things

The “Internet of Things” is now well and truly established as a mainstream buzzword. The reason for its success could be explored at length, however this term is becoming overused, just like “Cloud”. The term has come to mean many different things to different people in different situations. “Things” works well to describe technology reaching smaller items, but “Internet” is only a component of a broader field that we can call Digital Things.

This Digital Things revolution is largely driven by the recent accessibility of tools such as Arduino, Raspberry Pi and more: miniaturisation of computing that stretches even the definition of embedded computing. Millions of people are holding such tools in their hands wondering what to do with them. They all experience unique problems, and we see some amazing ideas emerge from these masses.

In health, the quantified self may eventually see information flow over the internet, but that’s not what all the fuss is about. Rather, it’s about Information from Things. Measuring as much as we can, with new sensors being the enablers of new waves of information. We want to collect this information and analyse it. Connecting these devices to the internet is certainly useful to collect and analyse this information.

Then there are many applications for the Control of Things. Driverless cars are generally not internet connected, and neither are vacuum robots, burger building machines, a novel 100k-colour pen or many, many more things. It would seem that holding up the term Internet of Things as the inspiration limits the possibilities.

In the end, Digital Things is the most suitable term to describe what we are seeing happen today. We are taking things in our lives which normally require manual work, and using embedded electronics to solve problems, whether for information or control; the internet is not always necessary.

Let’s build some more Digital Things.

Geelong has a clean slate

I hope you’re done. Q&A was your last chance to detox from any doom and gloom you had left.

The loss of jobs, particularly at Ford, is not a pleasant experience for retrenched workers, but there’s no changing the past. The fact is Geelong now has a clean slate to dream big, and driverless electric vehicles is a perfect fit for the future of manufacturing.

On Q&A last night, Richard Marles was spot on, describing the automotive industry as one of our most advanced in supporting technical innovation in Australia. But ironically, the industry as a whole has missed the boat and was always on a trajectory toward disaster.

I have been watching the industry since 2010. I have observed the emerging phenomenon of the electric vehicle, and the lack of interest from our local automotive industry despite the clear need. I have realised that any automation is to be embraced, despite the unpleasant short-term job losses. And still we’re about to miss a huge opportunity.

The public forum is full of emotion, desperation, finger pointing, and frankly ignorance.

Geelong, we have a clean slate.

Kindly watch this video: https://www.youtube.com/watch?v=CqSDWoAhvLU. It’s all Geelong needs to drop the past and grasp the future. Share it with your friends and call up all the politicians you know. It’s been there the whole time, and this vision for Geelong is all we need to forget our sorrows. You won’t understand unless you see the video. We need to act now.

I have covered Electric Vehicles comprehensively in the past, but they’re today’s reality. We need to aim higher. Does Geelong even know anything about driverless cars?

People are immediately cautious of change, which is why the technology needs to be tested, and tested here in Geelong. This will be a great focal point for our retraining efforts. Imagine cheap transport and independence for the elderly and disabled. Cheaper, safer and faster deliveries. Reduced traffic congestion and the elimination of traffic lights – no stopping! Cars that drop you off and pick you up will park out of town – what car parking problem? What will we do with all those empty car park spaces in the city? More green plants and al fresco dining?

But most importantly zero road fatalities. If this is the only reason, it’s all we need.

They are legal in California today. What stepping stones will we take to legalise fully driverless cars in Victoria? These massive technology companies will only move to hospitable markets. Who is talking to Nissan and Tesla about building the next generation of electric driverless vehicles in Geelong? We have been given a clean slate; there are too many exciting opportunities around to waste any more time on self-pity!

Oh, and trust me when I say that’s just the tip of the iceberg – I’m not telling you everything; find out for yourself. Click all the links found in this article for a start, it’s what they’re for.

Hint: There’s more to come from me, including the idea to start a “Manufacturing as a Service” company for Automotive, just like Foxconn does for electronics in China, inviting the Ford/Alcoa workers, their investment, GRIIF investment, outside investors and Tesla. There’s lots more work to do, but it’ll be worth it.



Mining Space

It’s quite an aspirational idea – to even look at mining asteroids in space. It may feel unreachable, something that’s always going to be put off to the future. But the creation of the new company Planetary Resources is real, with financial backers and a significant amount of money behind it. We’re currently in transition: Government, and particularly the U.S. government, is minimising its operational capacity for space missions, while the commercial sector is being encouraged and is growing. For example, Sir Richard Branson’s Virgin Galactic, along with other organisations, is working toward real, affordable (if you’re rich…) space tourism, and by extension the commoditisation of space access in general, bringing down prices and showing investors that space isn’t just for science anymore – you can make a profit.

I recently read a pessimistic article, one where the break-even price for space mining is in the hundreds of millions of dollars for a given mineral. One needs to be realistic; however, in this article I think the author is being way too dismissive. You see, there are many concepts in the pipeline which could significantly reduce the cost of earth-space transit. My most favoured is the space elevator, where you don’t need a rocket to reach kilometres above the earth (although you would likely still need some sort of propulsion to accelerate into orbit).

But as well as being across technology, a critic needs to be open to other ideas. For example, why bring the minerals back to Earth? Why not attempt to create an extra-terrestrial market for the minerals? It may well cost much more to launch a large bulk of materials into orbit than to extract materials from an asteroid (in the future), with space factories building cities in space.

Of course, I still think space mining is hopeful at best; let’s balance despair with hopeful ideas.


Introducing OurNet – A community project

I’ve kept it under wraps for a while, but hope to make this a more public project. This lowers any prospects of me making any money from it, but does make it more likely that I will see it happen.

While campaigning and thinking laterally about the issues and technologies of the NBN, I devised some interesting configurations of wireless and networking devices which could create a very fast internet for very little cost.

Each aspect provides little improvement on its own, but together they can form an innovative, commercialisable product.

Aspect 1 – Directional links are fast and low noise

Nothing very new here. They are used for backhaul in many situations, in industry, education, government and commercially. Use a directional antenna and you can focus all the electromagnetic radiation. On its own this cannot create a 10Gbps internet system to each home for a commercialisable price.

Aspect 2 – Short directional links can be made very fast

There is lots of research in this domain at the moment: think of the new BlueTooth standard, WiGig, UWB and others, all trying to help reduce wires in the home and simplify media connectivity. If, rather than connecting each house to a local aggregation point, we connected house to house, a short-link technology could be employed to create links in the order of 10Gbps between houses. But on its own this does not create a low-latency network; with all those houses, latency would add up.

Aspect 3 – Mesh network node hop latency can be driven to practically 0

When you think wireless mesh, you think of WiFi systems. Don’t. I have devised two methods for low latency switching across a mesh. Both involve routing once to establish a link (or a couple of links).

The first establishes a path by sending an establishment packet containing all the routing information, which is popped at each hop, along with a pathid. Subsequent data packets then include only the pathid, and are switched (not routed) to the destination. The first method requires buffers.
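As a rough sketch of what the first method could look like at each node (all names here are invented for illustration):

// Pathid label switching at one mesh node: an establishment packet pops
// this hop’s routing entry and records the mapping; later data packets
// are switched by pathid alone, with no route lookup.
class MeshNode {
  constructor() {
    this.switchTable = new Map(); // pathid -> outgoing link index
  }
  handleEstablish(packet) {
    const nextLink = packet.route.shift(); // pop this hop’s routing info
    this.switchTable.set(packet.pathid, nextLink);
    if (packet.route.length > 0) {
      this.send(nextLink, packet); // forward on to the next hop
    }
  }
  handleData(packet) {
    this.send(this.switchTable.get(packet.pathid), packet); // switched, not routed
  }
  send(link, packet) { /* transmit on the given link (hardware-specific) */ }
}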

The second establishes a path similarly to the first method, except that rather than a pathid, the node reserves timeslices at which the correct switching will occur, a bit like a train track. This one can potentially waste some bandwidth, particularly with guard intervals; however, the first method can supplement it, sending packets even in the guard intervals and unreserved timeslices. The second method does not require buffers.
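A corresponding sketch for the second method (again, names invented):

// Timeslice reservation: each slot in a repeating frame is pre-programmed
// with an outgoing link, so switching needs no per-packet routing state.
class SlotSchedule {
  constructor(slotsPerFrame) {
    this.slots = new Array(slotsPerFrame).fill(null); // null = unreserved
  }
  reserve(slotIndex, link) {
    this.slots[slotIndex] = link;
  }
  // At each clock-disciplined tick, forward on the reserved link; unreserved
  // slots (and guard intervals) can carry pathid-switched bursts instead.
  linkFor(slotIndex) {
    return this.slots[slotIndex];
  }
}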

The second method is best for reserving static bandwidth, such as for a phone or video call, or for a baseload of internet connectivity so that HTTP requests are very responsive. The first is for bursting above that minimum baseload of connectivity.

There is a third (and fourth and fifth – multicast and broadcast) method for very large chunks of data, which can simply be sent with the route, or routed on demand. Such a method might be eliminated completely though, as it overlaps too much with the first.

This can be implemented initially with an FPGA and other components, such as a GPS module for accurate timing (or a Kalman-filtered multiple-quartz system), and eventually mass produced as an ASIC solution.

Aspect 4 – Total mesh bandwidth can be leveraged with distributed content

If every house has 4 links of 10Gbps, then you can see how quickly the total bandwidth of the mesh would increase. However this total bandwidth would be largely untapped if all traffic had to flow to a localised point of presence (PoP). That would be the potential bottleneck.

However, one could very easily learn from P2P technologies, and the gateway at the PoP could act as a tracker for distributed content across the mesh. Each node could be capable of storing terabytes of data very cheaply. So when you go to look up The Muppets – Bohemian Rhapsody, you start getting the content stream from YouTube, but the stream is cut over once you have established a link to the content on the mesh.
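A toy sketch of that tracker role (invented names; a real tracker would also need expiry, verification and load balancing):

// PoP gateway acting as a tracker for content cached on the mesh.
// If nodes already hold the content, serve from the mesh; otherwise
// stream from the source and register the new holder for next time.
class PopTracker {
  constructor() {
    this.index = new Map(); // contentId -> list of holder node addresses
  }
  lookup(contentId) {
    const holders = this.index.get(contentId) || [];
    return holders.length > 0 ? { from: 'mesh', holders } : { from: 'source' };
  }
  register(contentId, nodeAddress) {
    const holders = this.index.get(contentId) || [];
    holders.push(nodeAddress);
    this.index.set(contentId, holders);
  }
}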

Problems

There are some problems in this grand plan to work through, but it should only be a matter of time before solutions are found.

The first is finding the perfect short links. Research thus far has not been directed at developing a link specifically for our system; at the moment we would be re-purposing a link to suit our needs, which is completely viable. However, to gain the best performance, one would need to initiate specific research.

The second is installation: we need to find the best form factor and installation method for each node on houses. I anticipate that a cohesive node is the best option: all components, including the radios and antennas, on the same board. Why? Because every time you try to distribute the components, you need to go from our native 64-bit communication paths to Ethernet or RF over SMA etc., gaining latency and losing speed and/or gaining noise. However, by having everything together, you increase the distance between nodes. One possible compromise could be to use waveguide conduit to carry the various links closer to the edge of the house, capped with plastic to prevent spiders getting in.

The final problem is a subjective one: power consumption. However, this is a moot point for various reasons. For one, the node can be designed for low power consumption. Secondly, the link technologies need not be high-power devices; I’ve seen some near-IR transmitters just come out (not commercially yet) which cost about a dollar each (for the whole lot), can reach speeds of 10Gbps, and are very low power. Finally, with the falling cost of solar panels, one could incorporate them into the node (with a battery) to lower installation costs.

Opportunities / Disruptions

A new paradigm in internet:

OurNet is the current name for a reason. People own their own mobile phone and pay it off on a plan. With OurNet you can own your own node, and pay it off on a plan. But in addition, the node you own becomes part of the actual network as well as your connection to it: it forms the backhaul AND the last mile! Hence it being everyone’s net. Such a shift in ownership is sure to have an impact on consumers – a new way of accessing the internet and a new paradigm of belonging and contributing.

And of course OurNet achieves fixed-line-like infrastructure with wireless technology. Individuals, businesses and datacentres could have four (and even more) redundant links, with each link able to have multiple redundant paths. This is not possible with the current (domestic) hierarchical model of the internet, where one needs to subscribe to multiple vendors to achieve redundancy, costing $$$. You could reliably host your own website from your home or business, to the world!

Faster Mobile communications

With a node on every house, mobile communication can become very fast and have very low contention. Each node can be equipped with an omni-directional antenna for short-range communication. In addition, a beam-forming directional antenna or MEMS tunable antenna can supplement or replace the omni-directional antenna, allowing very high-speed, low-noise links to mobile communicators.

High Precision Positioning

OurNet is made up of fixed-position nodes with super high resolution timing. If this is leveraged, GPS systems can be enhanced to the millimetre, opening up further opportunities for driverless cars and the like (they already work well with visual object detection and lidar, but an additional reference point can’t hurt).


Civilisation Manual


What would happen if an asteroid struck our planet and left a handful of people to restart civilisation? Or if you and a few people washed up on an uninhabited island with nothing but the shirts on your backs? Many would picture building huts, scavenging for food, starting some basic crops if possible. But that would be it, the limit. You wouldn’t comprehend completely rebuilding civilisation, with the luxuries available as they are today. But I do. I’m curious: what would it take? If all you could take with you was a book, what would be written in that book? What does the Civilisation Manual say?

Whenever there is talk of civilisation, it seems that all you hear is philosophy, but seldom the practicality of achieving it. I assert that the creation of such a Civilisation Manual would be a useful undertaking, not so much for its hypothetical uses, but rather for the ability to teach how modern economies work. I believe that such a book should contain all the information taught to children in school, if not more. Such a book might be very large.

There would also be additional questions to be asked of the hypothetical end-of-the-world scenario. How long would it take to rebuild a civilisation to current-day technology? What tools would most quickly speed up the process? Is there a minimum number of people required for this to work? What level of intelligence is required to execute it? Just one genius? How long until the female primeval desire for shopping is satisfied? And the perfect shoe manufactured?


I would love to see a community website started to collect such information. We already have Wikipedia, but you are not told the intimate details of how to find iron ore, how to cast iron, how to produce flour from wheat, or how to build a crude resistor or capacitor to help you make more refined components. It is this knowledge which is hard to find; perhaps we are forgetting how we built a digital civilisation.

Also, given the opportunity to build a civilisation from scratch, there may be some interesting ideas which could be included that have never been encountered in history before. For example, the book could focus on automation, relieving the humans of hard and repetitive tasks. This could go even further than what is achieved today. In 10 years, perhaps robots will be washing and ironing clothes, cooking meals, etc.

What a Civilisation Manual should NOT contain:

  • Advertising
  • References to Gilligan’s Island
  • Everything – put in the most useful information, and if you have time, add more.

What a Civilisation Manual should contain:

  • Very brief justifications of suggestions – it’s not a history book, it’s a survival book. It’s good to reassure the reader of the thought which goes into each of the suggestions in the book. For example, if X happens to a person, cut their leg off; briefly describing blood poisoning might be more reassuring.
  • Tried and tested procedures and instructions – can a 10-year-old kid work it out, or does it require an academic professor? And do you replace the palm frond roof monthly or yearly?
  • Many appendices:
    • A roadmap to digital civilisation – showing a tree of pre-requisite steps and sections on achieving each of the steps.
    • Recipes – Particularly useful when all you’ve got is coconuts and fish. How do you clean a fish?
    • Inter-language Dictionary – who knows who you’ll be with.
    • Plant Encyclopaedia – Identification of and uses for plants.
    • Animal Encyclopaedia – Do I cuddle the bear?
    • Health Encyclopaedia – How do I deliver the baby?

And an example of chapters:

  • Introduction – Something like “Don’t panic, breathe… you took the right book, in 5 years you’ll have a coffee machine again”

  • Chapter 1: Basic Needs – You’ll find out about these first, food, water, shelter.
  • Chapter 2: Politics and Planning – Several solutions for governing the group should be provided to choose from, a bit like a glossy political catalogue. It won’t contain things like Dictatorship or Monarchy; more like Set Leader, Rotating Leader, or The Civilisation Manual Is Our Leader. Planning will mostly be pre-worked in the appendix, where technology succession is described with expected timelines for each item.
  • Chapter 3: Power – No, not electricity: power. This section explains its importance and how to harness power, from wind/water for milling to animals for ploughing. Of course the progression of civilisation would eventually lead to electricity.

The book should also contain several pencils, many blank pages, and maybe we could sneak in a razor blade. This doesn’t break the rules of only being allowed to have a book – publishers are always including CDs and bookmarks…

I think it would be interesting anyway…

Robots – the working class


I have found myself considering whether doom would really befall the world if we mass employed robots to do all of our dirty work. Would we be overrun by machines which rose up and challenged their creators? Would our environment be destroyed and over polluted? I think not. In fact our lives would be much more comfortable and we would have a lot more time.

Life on earth got a lot better around the 1800s, the dawn of the industrial age. In the two centuries following 1800, the world’s average per capita income increased over 10-fold, while the world’s population increased over 6-fold [see Industrial Revolution]. Essentially, machines – very simplistic robots – made human lives much better. With steam power and improved iron production, the world began to see a proliferation of machines which could make fabrics, work mines, machine tools, increase production of consumables, and enable and speed up the construction of key infrastructure. Importantly, it is from the industrial revolution that the term Luddite originated, describing those who resisted machines because their jobs were displaced.

We now find ourselves 200 or so years later, many of us in very comfortable homes, with plenty of time to pursue hobbies and leisure. There does, however, remain scope for continued development, allowing machines and robots to continue to improve the lives of people. It is understood that one or more patents actually delayed the beginning of the industrial age, which of course is why I advocate Technology Development Zones with relaxed rules regarding patents. However, I believe there is a very entrenched Luddite culture embedded in society.

Now, being the organiser of the campaign NBNOptions.org, I have been accused of being a Luddite myself. However, no progress has lasted without a sound business case. Furthermore, the Luddites of the industrial revolution were specifically those put out of business by the machines.

Therefore the current or potential Luddites include:

  • The Automotive Industry status quo. – Movement to Electric Cars will make hundreds of thousands redundant. Consider how simple an electric car is {Battery, Controller, Motor, Chassis, Wheels, Steering}, and how complicated combustion engines are with the addition and weight of the radiator, engine block, oil, timing, computer,… And all the component manufacturers, fitters, mechanics and further supporting industries that will be put out of business.
  • The Oil industry (and LN2) – Somewhat linked to the Automotive industry. Energy could very well be transmitted through a single distribution system – electricity – at the speed of light. No more oil tankers, no more service stations, no more oil refineries, no more oil pipelines, no more oil mining, no more petrol trucks, no more oil spills. (The replacement for oil needs to be as economical or more economical – no ideologies here).
  • Transport industry – Buses, Trains, Trucks, Taxis, Sea Freight and even air travel all currently employ many thousands to sit in a seat and navigate their vehicle. Technology exists to take over and do an even better job. It’s not just the safety concerns delaying such a transition but also the Luddites (and patent squatters).
  • Farming – The technology is possible. We could have economical fruit-picking machines, and many mega-farm operations already have automatic harvesters for grain. Imagine all those rural towns, already under threat of becoming ghost towns, having to contend with technology replacing hard workers.
  • Manufacturing – Is already very efficient, but we still see thousands of people on production lines simply pressing a button. Most manufacturing jobs could be obliterated, with only one or two people required to oversee a factory – how lonely.
  • Housewives – Are possibly not Luddites, given many would relish even more time for leisure and their family; however, so many of their tasks could be completely centralised and automated. Cooking and associated appliances could be completely abolished: why buy an oven, dishwasher, sink, fridge, freezer, cupboards, dinnerware, pots, pans and stove, and then spend 1-2 hours a day in the kitchen and supermarket, when you could potentially order your daily meals from an industrial kitchen where all meals are prepared by robots for a fraction of the cost and time?
  • Construction – It’s amazing how many people it takes to build a skyscraper or house. Why does it still require people to actually build them? Why can’t houses be mass pre-fabricated by machines in factories then assembled by robots on-site? How many jobs would be lost as a result?
  • Services sector – There are many services-sector jobs where software and robots could easily be designed and built to relieve such workers from their daily tasks. Accounting could be streamlined such that all business and personal finances are managed completely by software. With robots now aiding in surgery, why can’t robots actually perform the surgery, give a massage, or pull a tooth? Why are there so many public servants dealing with questions, answers and data entry when we have technology such as that found in Watson able to take over such tasks? Even many general practitioners are resisting the power available for self-diagnosis – do you think they’ll fund the further development of such tools?
  • Mining – Is as crude as grain farming and could easily be further automated, making thousands and thousands redundant in mines, and even those surveying future mining sites.
  • Education – How important is it to have children learn as much as possible while they’re young (beyond simple skills such as reading, writing and arithmetic) when the whole world could be run by software and robots? When complicated questions can be answered by a computer instead of a professor? Why lock children behind desks for 20 hours a week when they could be out playing?
  • Bureaucracy – With no workers there would be no unions and no union bosses, no minimum wage, no work safety inspector…
  • Military – (Ignoring the ideology of world peace) We already see the success of the UAV, an aircraft which flies autonomously, only requiring higher-level command inputs for its mission. Why enhance soldiers when you can have robot soldiers? War could even be waged without blood, with the winner having enough firepower at the end to force the loser to surrender outright (quite ridiculous in reality – I know).
  • Care – There are many people employed to look after the sick and elderly. Even though the work can be challenging and the pay often low, it’s still a job – a job that robots could potentially do instead.

With time, such a list could easily be expanded to encompass everyone. Are we all collectively resisting change?

With a world full of robots and software doing everything, what do humans do with 100% unemployment? Do we all dutifully submit our resumes to Robot Inc three times a week? Would we all get on each other’s nerves? Do we need to work? Would we lose all purpose? Ambition? Dreams?

To best understand how a robot utopia works, just simplify the equation to one person – yourself on an island. You could work every day of your life to make sure you have enough water, food and shelter, or, if you arrived on the island with a sufficient complement of robots, you could enjoy being stranded in paradise. Every step in between, from doing everything yourself toward doing nothing yourself, sees your level of luxury increase.

There’s no doubt that the world will be divided into two classes: those that are human and have a holiday every day, and those that are robots – the working class.

Revisiting DIDO Wireless


I’ve had some time to think about the DIDO wireless idea, and I still think it has a very important part to play in the future – assuming the trial conducted with 10 user nodes is truthful. Before I explore the commercial benefits of this idea, I will first revisit the criticisms, as some have merit and will help scope a realistic business case.

Analysis

Weaknesses

  • One antenna per concurrent node – The trial used 10 antennas for 10 user nodes. Each antenna needs a fixed-line or directional wireless backlink – this would imply poor scalability of infrastructure. [Update: This is likely so, but Artemis claim the placement of each antenna can be random – whatever is convenient]
  • Scalability of DIDO – We are told of scaling up to 100s of antennas in a given zone. I question the complexity of the calculations for spatially dependent coherence; I believe the complexity is exponential rather than linear or logarithmic. [Update: The Artemis pCell website now claims it scales linearly]
  • Scalability of DIDO controller – Given the interdependence of signals, is the processing parallelisable? If not, this also limits the scale of deployment. [Update: Artemis claim it scales linearly]
  • Shannon’s Law not broken – The creators claim to break the Shannon’s Law barrier. This appears to be hyperbole: they are not increasing spectrum efficiency, rather they are eliminating channel sharing. The performance claims are likely spot on, but invoking “Shannon’s Law” was likely purely to generate hype – which is actually needed, in the end, to get enough exposure for such a revolutionary concept.

Neutral

Discussion surrounding neutralised claims which may be reignited, but are not considered weaknesses or strengths at this point in time.

  • Backhaul – Even though the antennas appear to require dispersed positioning, I don’t believe the backhaul requirements to the central DIDO controller need to be considered a problem. They could be fixed line or directional wireless (point to point). [Update: This is not really a problem. Fibre is really cheap to lay for backhaul in the end; it’s most expensive for the last mile. Many Telcos have lots of dark fibre not being used, and Artemis is partnering with Telcos rather than trying to compete with them]
  • DIDO Cloud Data Centre – I take this as marketing hyperbole. Realistically, a DIDO system needs a local controller; all other layers above such a system are distractions from the raw technology in question. As such, the communication links between the local controller and antennas need not be IP transport-layer links, but would rather be link-layer or even physical-layer links.
  • Unlimited number of users – Appears to also be hyperbole; there is no technological explanation for such a sensational claim. We can hope, but not place this as a Pro until further information is provided. [Update: It does scale linearly, so this is a fair claim when compared to current cell topology, or compared to a pCell limited by exponential processing load]
  • Moving User Nodes – Some may claim that a moving node would severely limit the performance of the system. However, this pessimistically assumes a central serial-CPU-based system controls everything (a by-product of Rearden’s “Data Centre” claims). In reality, I believe it’s possible for a sub-system to maintain a matrix of parameters for the main system to encode a given stream of data, and all systems may be optimised with an ASIC implementation. Leaving this as a neutral but noteworthy point.
  • Size of Area of Coherence – Some may claim a problem with more than one person in an area of coherence, assumed to be around one half-wavelength (see the quick calculation after this list). How many people do you have 16cm away from you (900MHz)? Ever noticed high-density urbanisation in the country? (10-30MHz for ionosphere reflection – <15m half-wavelength) [Update: demonstrations have shown devices as close as 1cm away from each other – frequency may still be a limiting factor of course, but that is a good result]
  • DIDO is MIMO – No, it’s very similar, but not the same, and is likely inspired by MIMO. Generally MIMO is employed to reduce error, noise and multipath fading; DIDO is used to eliminate channel sharing. Two very different effects. MIMO precoding creates higher signal power at a given node – this is not DIDO. MIMO spatial multiplexing requires multiple antennas on both the transmitter and receiver, sending a larger-bandwidth channel via several lower-bandwidth channels – DIDO nodes only need one antenna – this is not DIDO. MIMO diversity coding is what it sounds like, diversifying the same information over different antennas to overcome wireless communication issues – this is not DIDO. [Update: Artemis and the industry are now standardising on calling it a C-RAN technology]
  • 1000x Improvement – Would this require 1000 antennas? Is this an advantage, given the number of antennas required? MIMO is noted to choke with higher concurrency of users. Current MIMO systems with 4 antennas can provide up to 4x improvement – such as in HSDPA+. Is MIMO limited to the order of 10s of antennas? Many, many questions… [Update: This is likely so, but Artemis claim the placement of each antenna can be random – whatever is convenient]
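For reference, the half-wavelength figures mentioned in the list above follow from λ/2 = c/(2f); a quick check:

// Half-wavelength (the assumed area-of-coherence scale): lambda/2 = c / (2f)
const c = 299792458; // speed of light, m/s
for (const [label, f] of [['900 MHz', 900e6], ['10 MHz', 10e6]]) {
  console.log(`${label}: ${(c / (2 * f)).toFixed(2)} m`);
}
// 900 MHz: 0.17 m (~16-17 cm); 10 MHz: 14.99 m (<15 m)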

Strengths

  • Contention – Once a user is connected to a DIDO channel, there is no contention for the channel and therefore improved latency and bandwidth.
  • Latency – Is a very important metric, perhaps as important as bandwidth. Latency is often a barrier to many innovations. Remember that light propagates through optical fibre at two-thirds the speed of light.
  • Coverage – It seems that DIDO will achieve better coverage and field fewer black spots than what is achievable even with cellular femtocells. Using new whitespace spectrum, rural application of pCell would be very efficient, and if rebounding off the ionosphere is still feasible, the answer to high-speed, high-coverage rural internet.
  • Distance – DIDO didn’t enable ionosphere radio communications, but it does make ionosphere high bandwidth data communication possible. Elimination of inter-cell interference and channel sharing make this very workable.
  • Physical Privacy – The area of coherence represents the only physical place the information intended for the user can be received and sent from. There would be potential attacks on this physical characteristic, by placing receivers adjacent to each DIDO antenna, and mathematically coalescing their signals for a given position. Of course encryption can still be layered over the top.
  • Bandwidth – The most obvious, but perhaps not the most important.
  • [New] Backward Compatibility – Works with existing LTE hardware in phones. Works better with a native pCell modem, particularly for latency. Seamless handoff to cell networks, so it can co-operate.
  • [New] Wireless Power – Akbars (See Update below) suggested this technique could be used for very effective Wireless Power, working over much larger distances than current technology. This is huge!

Novel Strength

This strength needed particular attention.

  • Upstream Contention Scheduling – The name of this point can change if I find or hear of a better one. (TODO…)

Real World Problems

Unworkable Internet-Boost Solutions

I remember reading of a breakthrough where MEMS directional wireless was being considered as an internet boost. One would have a traditional internet connection, and when downloading a large file or movie, the information would be sufficiently cached in a localised base station (to accommodate a slow backlink or source) and then forwarded to the user as quickly as possible. This burst would greatly improve download times, and a single super-speed directional system would be enough to service thousands of users given its extreme speed and consumers’ limited need for large transfers. Of course, even such a directional solution is limited to line of sight; perhaps it would need to be mounted on a stationary blimp above a city…

Mobile Call Drop-outs

How often do you find yourself calling back someone because your call drops out? Perhaps it doesn’t happen to you often because you’re in a particularly good coverage area, but it does happen to many people all the time. The productivity loss and frustration is a real problem which needs a real solution.

Rural Service

It is very economical to provide high-speed communication to many customers in a small area; however, when talking of rural customers, the equation is reversed. Satellite communication is the preferred technology of choice, but it is considerably more expensive, is generally a lower-bandwidth solution, and is subject to poor latency.

Real World Applications

The anticipated shortcomings of DIDO technology need not be considered deal breakers. The technology still has the potential to address real-world problems. Primarily, we must not forget the importance and dominance of wireless communications.

Application 1: A system could be built such that there may be 10 areas of coherence (or more), used to boost current-technology internet connections. One could use a modest-speed 5Mbps ADSL2+ service to easily browse the bulk of internet media {Text, Pictures}, and still download a feature-length movie at gigabit speeds. This is a solution for the masses.

Application 2: DIDO allows one spectrum to be shared without contention, but that spectrum need not be a single large allocation; it could mean a small (say 512Kbps) but super-low-latency connection. In a 10-antenna system with 20MHz of spectrum and LTE-like efficiency, this could mean 6000 concurrently active areas of coherence. So it would enable very good quality mobile communication, with super low latency and practically no black spots. It would also enable very effective video conferencing. All without cellular borders.

Applications 3 and 4: The same as Applications 1 and 2, but using a long-range ionosphere rural configuration.

Conclusions

We still don’t know much about DIDO; the inventors have surrounded their idea with marketing hype. People are entitled to be cautious – our history is littered with shams and hoaxes, and as it stands the technology appears to have real limitations. But this doesn’t exclude the technology from the possibility of improving communication in the real world. We just need to see Rearden focus on finding a real world market for its technology.

UPDATE

  • [2016-02-25] pCell will unlock ALL spectrum for mobile wireless – no more spectrum reservations. pCell could open up the FULL wireless spectrum for everyone! I hope you can grasp the potential there. Yesterday I read a new section on their website: “pCell isn’t just LTE”. Each pCell can use a different frequency and wireless protocol. This means you could have an emergency communication and internet both using 600MHz at the same time, metres away! In 10 years, I can see the wireless reservations being removed, and we’ll have up to TERABITS per second of bandwidth available per person. I’m glad they thought of it, but this is going to be the most amazing technology revolution of this decade, and will make fibre to the home redundant.
  • [2015-10-03] It’s interesting that you can’t find Hint 1 on the Artemis site, even looking back through history (Google’s cache); in fact for 2015-02-19 it reads “Feb 19, 2014 – {Hint 2: a pCell…”, which is strange given my last update date below. Anyway, the newest Hint may reveal the surprise:
    • “Massless” – Goes anywhere with ease
    • “Mobile” – outside your home
    • “Self-Powered” – either Wireless Power (unlikely), or a suggestion that the pCell is like some sort of Sci-Fi vortex that persists without power from the user.
    • “Secure” – good for privacy conscious and/or business/government
    • “Supercomputing Instance” – I think this is the real clue, especially given Perlman’s previous cloud gaming startup.
    • My best guesses at this stage in order of likelihood:
      • It’s pCell VR – already found in their documentation; they just haven’t updated their homepage. VR leverages the positioning information from the pCell VRI (virtual radio instance) to help a VR platform with both orientation and rendering.
      • Car Assist – Picks up on “Secure” and the positioning information specified for VR. VR is an application of pCell to a growing market; driverless is another growing market likely on their radar. Driverless cars have the most trouble navigating in built-up, busy environments, particularly roundabouts. If pCell can help in any way, it’s by adding an extra absolute-position information source that cannot be jammed. Of course the car could also gain great internet connectivity, and multiple vehicles could be tracked centrally for more coordinated control.
      • Broader thin-client computing, going beyond “just communications” – although one can argue against that: pCell is communications, an enabler. This would include business and gaming.
      • Emergency Response – even without a subscription it would be feasible to track non-subscribers’ locations.
  • [2015-02-19] Read this article for some quality analysis of the technology – http://akbars.net/how-steve-perlmans-revolutionary-wireless-technology-works-and-why-its-a-bigger-deal-than-anyone-realizes.html
  • [2015-02-19] Artemis have on their website – “Stay tuned. We’ve only scratched the surface of a new era.…{Hint: pCell technology isn’t limited to just communications}’ – I’m gunning that this will be the Wireless Power which Akbars suggested in his blog article. [Update 2015-10-03 which could be great for electric cars, although efficiency would still be quite low]
  • [2016-06-02] Technical video from CTO of Artemis – https://www.youtube.com/watch?v=2ETMzxkyTv8
    • Better coverage – higher density of access points = fewer weak spots or blackspots
    • When there are more antennas than active users, quality may be enhanced
    • Typical internet usage patterns are conducive to minimising the number of antennas for an area
    • pCell is not Massive MIMO
    • pCell is Multi User Spatial Processing – perhaps MU-MIMO [see Caire’03, Viswanath’03, Yu’04]
    • According to mathematical modelling, densely packed MIMO antennas cause a large radius of coherent volume, while distributed antennas minimise it – which is intuitive.
    • See 4:56 for a 3D visualisation of 10 coherent volumes (spatial channels) with 16 antennas. Antennas are 50m away from users – quite realistic. Targeting 5dB SINR.
    • pCell Data Centre does most of the work – Fibre is pictured arriving at all pCell distribution sites.
    • 1mW power for pCell, compared to 100mW for WiFi. @ 25:20

Phishing Drill – Find your gullible users

Do you remember participating in fire drills in school? I remember them fondly – less school work for the day. I also remember earthquake drills when I went to school in Vancouver for a year. So what do drills do? They educate us about the signs and signals to look out for, and how to react. I believe spam filters work fairly well (that was a sudden change of subject). I use Gmail, where spam detection is built in, yet I still receive the occasional spam message. Educating those who fall for spam and phishing is an important factor in reducing the associated problems and scams. If all internet users had their wits about them, we could put spammers and phishers out of business – and most door to door salesmen. So how do we achieve this without million dollar advertising campaigns?…. Drills. Spam/Phishing Drills, or to be more generic, Internet Gullibility Drills (IGD – everyone loves an initialism).

How do you drill the whole of the Internet? “Attention Internet, we will be running a drill at 13:00 UTC”…. probably definitely not. My proposed method: every web application which liaises with its customers by email, or is at risk of being spoofed in a phishing scam, runs its own private drills. Such a drill would involve sending out an email message which resembles a real-life phishing/spam email. Each time different variables could be used – email structure, sender address, recipient’s name, a direct link to a spoof site. In any case the drill should be able to detect those who fall for it. They can then be notified of their mistake in a delicate way – “Haha – you just fell for our IGD, you loser!” is way off.
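
A minimal sketch of such a drill in Node/ES6 – the recipient list, spoof domain and the mail-sending step are hypothetical placeholders:

const crypto = require('crypto');

// Each recipient gets a unique token, so a click on the spoof link
// identifies exactly who fell for the drill.
const tokens = new Map(); // token -> email

function buildDrillEmail(email) {
  const token = crypto.randomBytes(16).toString('hex');
  tokens.set(token, email);
  return {
    to: email,
    subject: 'Your account has been suspended!', // classic phishing bait
    body: `Verify your details: https://drill.example.com/verify?t=${token}`,
  };
}

// Whatever serves drill.example.com calls this when the link is hit.
function recordClick(token) {
  const email = tokens.get(token);
  if (!email) return;
  tokens.delete(token); // count each recipient once
  console.log(`${email} fell for the drill - refer them to the education site`);
}

['alice@example.com', 'bob@example.com'].map(buildDrillEmail).forEach((mail) => {
  // sendMail(mail) would go here - e.g. your app's existing mailer
  console.log(`Drill sent to ${mail.to}`);
});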

Ultimately a Gullibility prevention centre website would exist, which caught users could be referred to, so they may refresh themselves on current threats, how to identify them and how to react. Quite a simple solution, and maybe I’m not the first one to think of it – I didn’t bother searching the Internet for a similar idea…


Creativity. Just Pulleys and Levers.

We are all amazed when we see a magician pull a rabbit out of the hat. Growing up as a kid, I was intrigued by magic tricks and always tried to work out how they were done. The sleight of hand, the magnets, the hidden cavity. They would have you believe that they were achieving something no other man could, that they had a unique power. Now we see TV shows which unveil even some of the more elaborate illusions. Creativity is the last remaining magic trick. Culture seems to idolise and mystify it. “It’s a gift”, “It’s a talent”, “They must use the right side of their brain a lot”. Paintings and artworks are highly prized, some running into the millions of dollars. The creative process in the mind seems elusive and magic. Society seems to think that creativity is only for a select few. I believe they’re wrong. I believe creativity is a simple process of random noise and judgement, two very tangible, logical concepts. This doesn’t take away from the impact of the product of creativity, but it does debunk the superhuman status of the skill.

Creativity doesn’t just happen once in an artwork; it happens multiple times at different levels, but always with the same components of random noise and judgement. A painter may start with a blank canvas and no idea of what they will paint. They then recall memories, images and emotions, which all feed in as both random noise and experience for judgement. They then choose a scene – the first round of creativity has occurred. The painter will not recall every detail of the scene perfectly, but will have to choose how the scene is composed. In their mind they imagine the horizon, the trees, perhaps a rock, or a stream, each time picturing different locations and shapes and judging their aesthetic suitability. Another round of creativity has occurred, with many more elements of creation. Once painting, a painter may think ahead to the texture of the rock, the direction of the stream, the type of tree, the angle and number of branches, the amount of leaves and the colours. More creativity. They may stand back, look at what they have painted and decide to change an element. In this case, their single painting is one possibility of randomisation and they have judged it to be substandard. They then picture other random forms and corrections and judge the most appropriate course of action.

At this point it’s very important to comprehend two components of art: design and performance. Once a painting has been designed it can be very easy to reproduce – or perform. Indeed, the painter may have refined their design through performance, but they are left with a blueprint at the end for reproduction. Music is constructed in the same way, and is easily reproduced by many musicians. Instead of picturing, like the painter, they can hear different melodies in their mind. Many musicians fluke a new melody: they make a mistake on their instrument, or purposefully allow themselves to play random notes, and with judgement they select the appropriate phrases. Lyricists have a sea of words and language moving through their mind, and often randomise and, with judgement, emerge with a final set of lyrics, sometimes with a theme. As with the painter, the final musical product can be recorded, and as with a painting, a song can be reproduced as a performance.

Randomisation can be, and most often is, external: anything we can receive at a low level through our five senses, or at a higher level derived from those senses, such as emotion. An executive is often presented with several options and uses judgement to select the most appropriate. They are not producing a painting or song, but their process is still creativity – to society, a rather boring form of creativity. Software development is considered a very logical process, yet the end product is legally considered copyrighted literature. How could something so logical be attributed a magic-like status? This always played on my mind; understanding creativity as noise and judgement in design and performance cycles helped bring creativity back to the mortal domain, and consequently allowed me to accept software as art. Any artist who reads this article will probably be up in arms – software isn’t art. It’s the same as uncovering the workings of a magician’s trick – no, there’s more to it, they would say. They are merely trying to protect their status – which doesn’t bother me. I like the occasional magic show.

PS.

An expanded creativity formula:

R = Randomisation
J = Judgement
C = Creativity

C = J(R) – “Creativity is a function of Judgement of Randomisation”, as described above.

A break down of the formula’s components – and further insight of my perceptions of the lower level concepts – (more for myself to map it out)

E = Experience
A = Article to be judged – perceptions through senses and feelings
K = Knowledge

J = F(A,E,K) – Judgement is a function of an Article to be judged, Experience and Knowledge

M = Memory
SFJ = Senses and Feelings and Past Judgement

E = M(SFJ) – Experience is a class of Memory, that of senses, feelings and past judgement
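
To make the formula concrete, here’s a minimal generate-and-test sketch in ES6 (the melody example and the “smoothness” judge are my own illustrative choices):

// C = J(R): generate random candidates (R), keep the one judgement (J)
// scores highest. The winner is the "creative" product (C).
function creativity(randomise, judge, rounds = 1000) {
  let best = null;
  let bestScore = -Infinity;
  for (let i = 0; i < rounds; i += 1) {
    const candidate = randomise();    // R: random noise
    const score = judge(candidate);   // J: judgement
    if (score > bestScore) {
      best = candidate;
      bestScore = score;
    }
  }
  return best;                        // C: the selected result
}

// Toy example: "compose" a four-note melody, judged on smoothness
// (smaller jumps between consecutive notes score higher).
const randomMelody = () => Array.from({ length: 4 }, () => Math.floor(Math.random() * 12));
const smoothness = (m) => -m.slice(1).reduce((sum, note, i) => sum + Math.abs(note - m[i]), 0);

console.log(creativity(randomMelody, smoothness));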


IPTV – How to conquer the living room

It’s embarrassing watching the video entertainment products coming out at the moment. They’re all trying to come up with the winning combination, and no one is succeeding – even Apple failed with their Apple TV product. The problem is that they’re trying to invent an expensive lounge room Swiss Army knife, when what customers need is simplicity. They are failing to see the primary barrier – no one has IP-enabled TVs.

Here’s my formula to conquer the living room:

  1. All new TVs should be IPTV-enabled with a gigabit ethernet port – this may include an on-screen display to surf the web etc., but basically it should support “Push IPTV”
  2. IPTV Adaptor – Develop a low-cost IPTV-to-TV device which simply supports “Push IPTV”. E.g. converts packets into an HDMI signal.
    • I want a company to develop an ASIC
    • It accepts and converts streamed video content (of the popular formats)
    • The chip supports output to HDMI, Component, S-Video or Composite
    • The chip is implemented in 4 different products: IP-HDMI, IP-Component, IP-S-Video, IP-Composite

With that barrier cleared, you don’t need to fork out for another gadget in your living room – you simply leverage your PC or laptop, pushing streaming video to any display in your home. When you connect your IPTV Adaptor to the network, it announces itself, and all media devices and media software can then push streaming video to that display.
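
Here’s a sketch of that announcement step in Node/ES6 using UDP multicast. The group address, port and message format are arbitrary choices of mine – a real product would more likely use an established discovery protocol such as SSDP/UPnP:

const dgram = require('dgram');

const GROUP = '239.255.42.42'; // arbitrary multicast group for this sketch
const PORT = 41424;            // arbitrary port

// The adaptor: broadcast its presence every 5 seconds.
function announce(name) {
  const socket = dgram.createSocket('udp4');
  const message = Buffer.from(JSON.stringify({ type: 'iptv-adaptor', name }));
  setInterval(() => socket.send(message, PORT, GROUP), 5000);
}

// Media software: listen for adaptors announcing themselves.
function discover(onDisplay) {
  const socket = dgram.createSocket({ type: 'udp4', reuseAddr: true });
  socket.on('message', (msg, rinfo) => {
    const info = JSON.parse(msg.toString());
    if (info.type === 'iptv-adaptor') onDisplay(info.name, rinfo.address);
  });
  socket.bind(PORT, () => socket.addMembership(GROUP));
}

announce('Lounge Room TV');
discover((name, addr) => console.log(`Found display "${name}" at ${addr}`));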

So now you can use your laptop / iPad as a remote. You drag your show onto your lounge room TV and away you go! While everyone is watching on the TV, you can see thumbnail previews of other IPTV shows currently showing – so your channel surfing doesn’t annoy everyone else 🙂

The Web Security Emergency

We responsible users of the internet have always been wary when surfing the Web. We know that we need to make sure websites use TLS security – we need to see HTTPS and a tick next to the certificate – to ensure no one is eavesdropping on the information being transmitted.

How wrong we are.

The security industry has long known the weakness of RSA and ECC – the major cryptography used on the internet – as well as other asymmetric cryptography algorithms, against a quantum computer. And it has done little to prepare for the advent of the first quantum computer, because it has always been a futuristic dream. But this position is quickly becoming antiquated: there have been many developments in the last few years which now have scientists projecting the first quantum computer to arrive within 5 years. 5 years isn’t that far away when you consider that your sensitive data could be being recorded by anyone today, or even in the past, in the hope of decrypting it in 5 years!

There are people who think that quantum computers will never come, but they are just burying their heads in the sand. Researchers have already developed one which implements Shor’s algorithm – the one which breaks RSA and ECC – on a chip!

So what is the security industry doing about it now? The threat doesn’t arrive in 5 years – the internet is insecure today. People are carrying out bank transactions today, believing that the data being transmitted will never be read by an unauthorised third party. Programs and drivers are signed with algorithms which will be broken in 5 years – what will stop malware then? There are also anonymity systems such as Tor and I2P which likely use RSA as the basis for their security; in 5 years, how many citizens in politically oppressed countries will get the death penalty?

Fortunately there are asymmetric cryptography algorithms which are not known to be breakable by quantum computers, but these have not been standardised or fully researched yet. Some can be found at http://spectrum.ieee.org/computing/software/cryptographers-take-on-quantum-computers. So what it comes down to is that the security industry doesn’t have the answer, and that’s the reason they are not telling anyone of the problem – they’re effectively covering up the truth.
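
As a taste of one quantum-resistant family – hash-based signatures – here’s a minimal Node/ES6 sketch of a Lamport one-time signature. This is illustrative only; among other caveats, each key pair must sign exactly one message:

const crypto = require('crypto');

const sha256 = (data) => crypto.createHash('sha256').update(data).digest();

// Private key: 256 pairs of random secrets. Public key: their hashes.
function keygen() {
  const priv = Array.from({ length: 256 }, () => [crypto.randomBytes(32), crypto.randomBytes(32)]);
  const pub = priv.map(([a, b]) => [sha256(a), sha256(b)]);
  return { priv, pub };
}

// i-th bit of a 32-byte hash.
const bit = (hash, i) => (hash[i >> 3] >> (7 - (i % 8))) & 1;

// Sign by revealing one secret per bit of the message hash.
function sign(message, priv) {
  const h = sha256(message);
  return priv.map((pair, i) => pair[bit(h, i)]);
}

// Verify by hashing each revealed secret against the public key.
function verify(message, sig, pub) {
  const h = sha256(message);
  return sig.every((s, i) => sha256(s).equals(pub[i][bit(h, i)]));
}

const { priv, pub } = keygen();
const sig = sign('transfer $100', priv);
console.log(verify('transfer $100', sig, pub)); // true
console.log(verify('transfer $999', sig, pub)); // false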

UPDATE:

I’ve seen a lot of rapid developments recently, and I’m still optimistic about an RSA-breaking quantum computer within 5 years (from June 3, 2010).

http://arstechnica.com/science/news/2012/04/doped-diamond-sends-single-photons-flying.ars

UPDATE:

The commercially available D-Wave (quantum annealing) can factorise numbers, according to some of their marketing and this StackExchange question. The StackExchange question also describes the currently perceived limits of D-Wave, or quantum annealing in general, estimating that N^2 qubits are required to factor an N-bit number. The current D-Wave has only 512 qubits.

If the number of qubits were to double annually, then 1024-bit SSL encryption could potentially be cracked by such a device in 11 years.
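
The arithmetic behind that 11-year figure (a sketch, assuming the N^2 qubit estimate and annual doubling both hold):

// 1024-bit key -> 1024^2 qubits needed; start from 512 qubits, double yearly.
const qubitsNow = 512;
const qubitsNeeded = Math.pow(1024, 2); // 1,048,576
const years = Math.log2(qubitsNeeded / qubitsNow);
console.log(years); // 11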

However, this is only what is commercially available. Given enough money, it is conceivable that a government or military could possess one now. Maybe even the NSA.

UPDATE:

D-Wave cannot break today’s SSL web encryption:

The optimizer they now claim to have is restricted to problems that can be mapped to an Ising model—in other words, the computer is not universal. (This precludes Shor’s algorithm, which factors integers on a quantum computer.)

http://arstechnica.com/science/2013/08/d-waves-black-box-starts-to-open-up/2/

UPDATE

I’ve got less than a year left on my 5-year prediction, but I have finally found a scientist making a prediction of their own. It would not be unreasonable to think the US DoD could have this already, or within a year, but it is most practical to simply say I was possibly out by 5 years. So effectively the warning starts today!

They hold out the possibility of a quantum computer being built in the next five to 15 years.

see http://www.abc.net.au/pm/content/2014/s4105988.htm

UPDATE [2015-09-30]:

Even the NSA are worried about the post-quantum computing world, see: http://hackaday.com/2015/09/29/quantum-computing-kills-encryption/

UPDATE [2015-10-14]:

Maybe my prediction was right (only out by 4 months): http://www.engineering.unsw.edu.au/news/quantum-computing-first-two-qubit-logic-gate-in-silicon

Apparently it is feasible to build a quantum computer today – one that can defeat all encryption used in internet communication today (as long as that data is wiretapped and stored). Although it may take 5 years for mass-scale commercialisation, I’m sure the NSA, FBI and DoD of the USA would be capable of building a quantum computer now, if they didn’t already have one.

The breakthrough by UNSW could very well have been discovered earlier in secret. So this has implications for international espionage today, broader law enforcement in a few years, and the whole underpinning of internet security in 5 years.

Using WiFi and searching Google via HTTPS? In 5 years, the owner of the Access Point could very likely decrypt your searches, and other information including bank passwords.

The only secure encryption today is symmetric, requiring a shared secret (a password) to be entered at each end of the communication channel.
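
For illustration, a minimal Node/ES6 sketch of such password-based symmetric encryption (scrypt key derivation plus AES-256-GCM; the parameter choices are mine, and both ends must already share the password):

const crypto = require('crypto');

function encrypt(plaintext, password) {
  const salt = crypto.randomBytes(16);
  const key = crypto.scryptSync(password, salt, 32); // derive a 256-bit key
  const iv = crypto.randomBytes(12);
  const cipher = crypto.createCipheriv('aes-256-gcm', key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, 'utf8'), cipher.final()]);
  return { salt, iv, ciphertext, tag: cipher.getAuthTag() };
}

function decrypt({ salt, iv, ciphertext, tag }, password) {
  const key = crypto.scryptSync(password, salt, 32);
  const decipher = crypto.createDecipheriv('aes-256-gcm', key, iv);
  decipher.setAuthTag(tag);
  return Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString('utf8');
}

const box = encrypt('my bank password', 'correct horse battery staple');
console.log(decrypt(box, 'correct horse battery staple')); // my bank password

Against a quantum adversary, AES-256 only loses the Grover square-root factor, leaving an effective 128-bit security level – unlike RSA and ECC, which Shor’s algorithm breaks outright.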

Further Reading

http://en.wikipedia.org/wiki/Quantum_computer

http://www.newscientist.com/article/dn17736-codebreaking-quantum-algorithm-run-on-a-silicon-chip.html

http://www.itnews.com.au/News/213800,toshiba-invention-brings-quantum-computing-closer.aspx

http://www.nature.com/nature/journal/v460/n7252/pdf/nature08121.pdf

http://spectrum.ieee.org/computing/software/cryptographers-take-on-quantum-computers

Super city: Pushing the technology boundaries

In the last article I discussed the concept of Technology Development Zones. This concept can be taken all the way with what we can call a super city. I started with this idea after thinking, what could I do with $1bn. After finishing with dreams of a house on the moon or a medieval castle in the mountains, I started jotting down some points.

Why can’t we start building an entirely new, entirely futuristic city? When you start from scratch, you can benefit from having no boundaries.

Australia just so happens to be the perfect place for such an idea: a good economy, and a housing shortage.

The Detail

I’ll try to keep it short

  • The city is a skyscraper – providing spectacular views for all residents. i.e. 500m high, 500m wide, 40m deep, accommodating a little under 50,000 people.
    • This reduces the human footprint, with all services contained within a single building. The only reason for people to leave the building is for recreation and farming.
  • It’s located at least 300km from Melbourne – reducing city sprawl
  • But it’ll only take you 30mins to travel 300km in any direction – see Transport below
  • Implements a “Base Luxury Standard” – a body corporate scheme, operating on economies of scale.
    • Logistics – Cater for all logistics problems in one solution – let’s call it a Transporter (see the sketch after this list)
      • A 3D “elevator” system
      • Elevator capsules which can carry up to 10 people and a few tonne
      • Can travel up/down, left/right, and back/forth
      • E.g. move from the first floor at the front of the building in the middle laterally, to the top floor at the back of the building on the left, without “changing elevators”
      • Transporter capsules travel laterally along what would normally be the hallway for walking to your apartment
        • When travelling laterally to an apartment, the transporter doors and apartment doors open together
        • In an emergency, the apartment doors can be manually opened and occupants can walk down the lateral transporter shaft
          • Manual overrides are detected by the system and transporters for the entire floor are speed reduced and obstacle detection is activated to avoid collision with people.
      • Keep in mind that in an emergency, transporters should still be operational laterally, as there is no danger of dropping.
      • Transporters are not just used to transport people but also:
        • Food – Washable containers, transport prepared food, cutlery etc.. from kitchens, used containers are returned to be washed.
        • Heating / Cooling – Heat bricks or molten salts and LN2 packs for refrigeration, air conditioning and heating
          • No pipes = less cost, no maintenance
        • Water – A set of dedicated water transporters is used to fill a small reservoir in each apartment
          • No pipes = less cost, no maintenance
          • Bathroom and commercial facilities do have pipes
        • General Deliveries – Furniture, clothing, presents, mail, dirty/washed clothes etc…
        • [Not Data] – That’s fixed line or radio wireless, can’t just transport hard disks, latency is much too slow 🙂
    • Food (Diet) – A set base cost for food every week which is pooled, out of which food providers are paid. To start off with, fully automated systems are desirable to peel, slice, etc.; it’s possible to have a fully automated catering system which deals with 80% of meals. The final 20% is catered for by chefs, who still use machines for preprocessing – at an additional cost. E.g. $5 per person per day for any basic meal, and more for specialist meals.
    • Climate – Instead of having thousands of small air conditioner compressors and inverters in every apartment, have 3 very large and very efficient heat pumps and then efficiently transport the heat/cold. Each apartment then has its own fan and climate control system where liquid nitrogen and heat bricks are utilised; a simple refrigerator and freezer also run off the liquid nitrogen, removing two more compressors.
    • Data – Fibre runs to each apartment, and then inside is patched to different equipment. A fibre runs to the TV and Ethernet over Power is provisioned and isolated for the apartment so that every appliance and electrical device is controllable. Wireless systems are a feasible alternative.
    • Hygiene – Several banks of showers and toilets on each floor; the transporter takes you to the next available toilet or shower as required. Instead of having a toilet and shower taking up space in each apartment, used perhaps a hundredth of the time in a day, a central bank of them is more efficient. The showers and toilets are self-cleaning, with minor cleaning cycles after every use and major cleaning cycles as required (e.g. every half day).
    • Transport – Within the building the transporter can take you anywhere, but what makes a remote city work well is fast transport to already established city centres. Monorail is quite expensive and still relatively slow and inefficient compared to air travel over long distances (about 800km). There is plenty of scope for new transport ideas:
      • Air-evacuated tunnel rail (supersonic speeds without the risk and fuel of staying aloft)
      • Personal aircraft (looking more like aeroplanes, possibly launched by a ground-based launcher – not those ridiculous artist impressions of cars with 4 loud, fuel-guzzling turbine engines)
      • Automated Electronic Vehicle transport
      • Community car pool (basically like small automated buses which only travel along a particular route or highway)
    • Menial Tasks – Clothes/Dish washing is fully centralized and automated. Less tedious work for residents means more time to live – a higher quality of life.
    • Shelter – No one truly owns their space, they can either hold (pay around $50,000 for their entire life) or rent (interest of $50,000 over lifetime)
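
Here’s a toy ES6 dispatcher for the Transporter above – routing, naming and the instantaneous “travel” are illustrative choices of mine, with no queuing, collision or emergency logic:

// Manhattan distance across floors, lateral position and depth.
const distance = (a, b) =>
  Math.abs(a.floor - b.floor) + Math.abs(a.across - b.across) + Math.abs(a.depth - b.depth);

class Transporter {
  constructor(positions) {
    this.capsules = positions.map((pos) => ({ pos, busy: false }));
  }

  // Assign the nearest idle capsule; return the leg-by-leg route.
  request(from, to) {
    const idle = this.capsules.filter((c) => !c.busy);
    if (idle.length === 0) return null; // caller queues and retries
    const capsule = idle.reduce((a, b) =>
      distance(a.pos, from) <= distance(b.pos, from) ? a : b);
    capsule.busy = true;
    // Route: pick up, travel vertically to the target floor, then laterally.
    const route = [capsule.pos, from, Object.assign({}, from, { floor: to.floor }), to];
    capsule.pos = to;    // travel is instantaneous in this toy
    capsule.busy = false;
    return route;
  }
}

const t = new Transporter([{ floor: 1, across: 0, depth: 0 }]);
console.log(t.request(
  { floor: 1, across: 5, depth: 0 },   // front of building, floor 1, middle
  { floor: 120, across: 40, depth: 1 } // back of building, top, left
));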

Conclusion

With a Super city, developed countries have an opportunity to push past the so-called “modern” boundaries of today and exceed people’s expectations with a completely reinvented society and lifestyle. Super cities are not just technology test beds; they offer citizens cheaper living for a greater quality of life and less stress – freedom from menial tasks, very short waits for transport and short travelling times.

But even developing countries could stand to benefit. The cost effectiveness of super cities and the efficient systems can help pull poor countries out of poverty. And various novelties could be redeployed into existing cities.

Technology Development Zones: Economic Development Zones for developed nations

How long can we keep calling a combustion engine modern? Or a toaster, or a microwave, or a stove, or even lounge rooms? We can’t break a lot of traditions or social norms, but there are definitely people out there willing to give it a go. I once saw a documentary about China’s Economic Development Zones (EDZs): small geographical areas which are isolated from the macro economy and its regulation, and used to attract investment. China most famously uses such zones to help its economy grow – allowing western investors to leverage cheap Chinese labour with western business practices. These EDZs are economic hot spots whose effects eventually flow through the greater Chinese economy. The general idea is that developing countries need EDZs to industrialise. I propose that such zones should never disappear, even in an advanced industrialised nation. In a developed economy the focus should be technological rather than economic – a Technology Development Zone (TDZ) – harnessed to further technology, processes, social refinement and regulation. Just as in developing countries, the main barriers are culture and law.

I consider TDZs important for future-seeking, “modernised” societies. There is often cultural resistance to change; a TDZ would attract people and families who are excited to consume new technologies and are open to it. A TDZ will help innovators commercialise, selling to a tight, first-mover market. People live in a TDZ voluntarily. Residents are co-operative, possibly innovators themselves, and should be able to find employment within the TDZ across a wide range of industries. They would be expected to try out new things, answer weekly questionnaires, contribute feedback and embrace change. People outside a TDZ are more likely to accept change if they have seen it in practice, and investors are more likely to invest in an idea that can be trialled in a co-operative market. It’s quite possible for the progressive social norms of a TDZ to spread beyond it, and transform a nation to be more conducive to change.

Many amazing technologies could be developed if everyone had access to all IP. Patents aren’t evil; they are necessary to protect inventors so they may extract value from their inventions, blocking out competitors which didn’t have enough foresight. Unfortunately there are cases where patent holders sit on a patent and don’t commercialise it, with potential consumers being the losers. There are even cases where companies buy out technology just to stop losing their traditional markets. A TDZ could offer a small community immunity from IP laws, offering tremendous innovation opportunities. IP holders would have priority to commercialise their IP within a TDZ, but if another company wants to build a product (say a fridge) which uses another company’s IP (e.g. text-to-speech) and the IP owner is not building the same product within the TDZ, then there should be no block. As a result, all products to be built for the TDZ should be approved by a Product Register, to avoid product overlap and to negotiate IP priority. I don’t consider such IP law exemptions mandatory for the success of a TDZ, but they would have significant benefits.

I have seen evidence that highly competitive markets can deter innovation. The latest craze – e.g. the iPhone – although innovative, is already successful in the regular market place and can dishearten new local innovation. The competitors in the smartphone market are super players such as Apple, Google, RIM and Microsoft. Thankfully Google created an open platform which is starting to reduce the monopolistic iPhone dominance. TDZ managers could help isolate fads from inside a TDZ, freeing up consumption capacity for new innovation. Technologies and products within a TDZ should be limited, where possible, to those not found outside it. Residents within a TDZ would never have the luxury of settling on a device such as the iPhone; new devices would supersede old ones. For example, the iPhone would have been expected, then the Google Nexus, then a Windows Phone 7 device, and so on. In trials, residents should receive significant discounts on such devices – after all, they would be expected to answer questionnaires quite frequently and sustain a relatively high consumption of technology.

The electric car is a great example of the need for a TDZ. In a previous article I discussed the resistance to change from the oil and combustion automotive industries. If a TDZ were set up in a small city, a micro-economy could be tooled to demonstrate a society living with electric cars. From that micro-economy the idea could spread to the rest of the country and then the rest of the world. The changes would be gradual, and the industries would be able to foresee the success in the TDZ and adapt for the eventual success in the greater community. Within the TDZ, regulations would be different: the government could deem EV patents unenforceable, and road laws would be relaxed, requiring engineer approval for reasonable vehicles. Consider the benefits: innovators would discover the best frontiers for the technology, such as logistics and cost-effective transport for the housebound elderly. Then the technology could move to mainstream transportation use, where the single occupant of a car can be productive while travelling.

Imagine the super futuristic TDZ. There could be social change almost impossible to introduce today due to safety hysteria. You can redesign infrastructure and experiment with new city layouts. Citizens expect to be able to watch a movie or do some work while they’re travelling; groceries are automatically ordered and delivered; no one does dishes, cooks their own meals, irons, or washes clothes; Internet speeds are tens of gigabits per second. Such a revolutionary change can only happen in a captive, conducive society where change is embraced.

The most effective TDZ would be a purpose-built city. It could be close to a capital city, so initial citizens can find work outside while the local economy and infrastructure are developing. Such a move would require significant conviction from a politician, and cannot be expected of the first TDZ in a nation. A TDZ in itself could be too progressive for a politician of today to call. IP relaxation could have serious political ramifications, but a successful TDZ may significantly outweigh those risks. In any case, a TDZ is the kind of venture that can be scaled up in stages. I live in Geelong. Geelong could be declared a TDZ precinct; this could start a demographic shift, seeing technology “thrill seekers” move to the region. At the same time, a new suburb could be planned and developed as a micro-TDZ. Depending on the success of the TDZ precinct, a purpose-built TDZ may become politically feasible.

The TDZ may very well play a significant part in our future. Leaving behind most traditions and inhibitions, we can begin to understand how society can better adapt to technology. Aside from the ideals of a more modern world, the economic benefits may exceed even the most optimistic expectations. What are the benefits of technology not merely available, but fully embraced by society?

In 1899, the U.S. Commissioner of Patents was famously – though, it seems, apocryphally – quoted as saying, “Everything that can be invented has been invented.” We must not let ourselves become accustomed to the status quo; we have a lot to learn.

Update:

http://blogs.news.com.au/heraldsun/andrewbolt/index.php/heraldsun/comments/more_class_war_as_the_government_robs_business_to_pay_bureaucrats/

Looks like my idea has been picked up in some form; too bad the team captain is going to lose the game (botch this, just like everything else).