The Phone is dead; long live Web Phone

Dial 1 for clock radio sound quality;
Dial 2 for abrupt disconnection.

Please put your life on hold while we find the next available inconvenience;
Your call is important to us, and whatever you do:
Don’t. Hang. Up.

Telephony is a terrible experience that disappoints us every day. Long phone numbers, dropped calls, call queues, and monotonous hold music are only some of the problems we’re all familiar with.

We’re using a prehistoric telecommunications system, handcuffed to the past. The obligation of keeping a person on an active and continuous “call” was cemented with the creation of the public telephone system in Berlin, Germany in 1877. That system used analogue lines, and people had jobs as “operators”, physically plugging in wires to connect you to another person. Improvements have been made over the years and continue to this day: first a transition to automatic connection with pulse dialing, then tones. At first, specialised electronic systems were used, and now computers with data networks over fibre optics. VoIP is the most recent notable extension, but what we ended up with was merely an internet line with the same terrible experience. Continuous “calls” started as a necessity of wires, but that’s what holds telephony back to this day.

That’s all about to change.

The Web is coming to the rescue. With continuous improvement to the web and web browsers, it’s now possible to take a fresh approach and leave the old phone system behind. This isn’t VoIP; this is talking to people through your web browser for free, with a user interface. A web feature called WebRTC was standardised in May 2017, and it is key to making this possible. It allows your browser to communicate directly with another browser, with no phone company in between and no software to install.

With the web, the experience of telephony can be reimagined and redefined. Picture this –

You visit HordernIT.com.au on your mobile phone and go to the contact page looking for the phone number. Instead of a phone number, you find the Web-Phone button, and you tap it. Up comes an animation showing an attempt to connect the call, then you see the message “Connected”. You put the phone to your ear and start speaking to the person on the other side with high definition audio quality. If there’s a break in internet connectivity, a new connection can be made, but it’s still known to be you; it’s still the same “call”.

This is just the tip of the iceberg – there is much more that Web-Phones will be able to do, and many problems they can solve for different industries. Make sure you follow me and HordernIT so you can learn about new capabilities and how they can help you.

The Web Phone has been possible for many years, but the technology pieces still need to be polished and packaged, and importantly the telephony culture needs to change. Every website needs a Web-Phone button, and the old phone number will remain alongside for quite some time during the transition.

Individual businesses and the economies of the world can benefit from greater productivity and better customer service, through clearer communication and better reliability.

Todd is a futurist and tech evangelist with HordernIT. Enquire using our contact form if you would like market-leading capabilities within your business.

Packed URL

Shortening services like TinyURL exist because short strings are hard to “compress”. I recently needed to compress a URL. I couldn’t use a URL shortener, because each URL is only used once, and the key changes on each one.

So I built my own algorithm which others will be able to use. This could also be used for QR Code data, and for Web Page Proxies which usually contain the URL as a parameter.

Let’s work backwards from the results:

We use the following input string, which is 82 characters:

https://stackoverflow.com/questions/1192732/really-simple-short-string-compression

With no packing, simply converting to base64 encoding gives 112 characters, like this:

aHR0cHM6Ly9zdGFja292ZXJmbG93LmNvbS9xdWVzdGlvbnMvMTE5MjczMi9yZWFsbHktc2ltcGxlLXNob3J0LXN0cmluZy1jb21wcmVzc2lvbg__

With GZip, a header needs to be included, so on short strings you actually go backwards, encoding to 124 base64 characters:

H4sIAAAAAAAEAA3L0Q5AMAwF0C-aZjwIf7MsxaLW6S3i73k.Z3NvmIngKe96sy2iT5f1oPNieNEKinHqx6En4yTyBpSjCQdsah7gVuoa.tCMgd9.dGQz6FIAAAA_

So what I achieved is significant, and specialised for URLs: only 64 base64 characters, which is 57% of the size of the original base64 string:

uU0ZIOUqiVS7lkSXoZCUhbPphsOceFkkQOuxkzBek524i3mnHIVlGznjI0lJOAs_

The fact that my scheme is “specialised” is important. That’s why mine beats the general GZip compression. Mine probably wouldn’t beat GZip on larger code files, but that’s not what it’s for.

How it works

It works by digesting a URL in a predictable way. URLs are predictable; they’re like a specific English sentence structure. Here’s a breakdown of the example URL (a code sketch of the bit packing follows the breakdown):

[https://] [] [stackoverflow] [.] [com] [] [/] [questions] [/] [1192732] [/] [really-simple-short-string-compression] []

  • [https://] – The URL scheme is either HTTP or HTTPS
    • 1 bit instead of 56 bits
  • [] – There’s no @, so no username or password – 1 bit = 0
  • [stackoverflow] – all characters are lower case, so a 5-bit encoding is used (65 bits), plus a header (3 bits) to indicate it’s a 5-bit string, plus a 3-bit-block number length (6 bits)
    • 74 bits instead of 104 ~ Running total: 76
  • [.] – In the host section, we expect multiple dots. So after every string section we encode whether there is a dot with a single bit – 1 bit down from 8
  • [com] is in our “host” dictionary which has 16 values so 4-bit, plus the header (3-bit) to indicate it was a dictionary lookup
    • 7-bits instead of 24 bits ~ Running total: 84
  • There are no more dots – 1 bit set to 0
  • [] There’s no :, so no port number override
  • [/] Now we expect folder forward slashes –
    • 1 bit set to 1 instead of 8 bits ~ Running total: 87
  • [questions] is encoded as a 5-bit string –
    • 45 bits + 3 + 6 = 54 bits instead of 72 bits ~ Running total: 141
  • [/] – 1 bit set to 1 instead of 8 bits
  • [1192732] is a number which is encoded as an 8-bit-block number needing 3 blocks + a 3-bit header = 27 bits instead of 56 bits
  • [/] – 1 bit set to 1 instead of 8 bits
    • ~ Running total: 170
  • [really-simple-short-string-compression] – 190 bits + 3 + 9 = 202 bits instead of 304 bits ~ Running total: 372
  • There’s no dot – 1 bit = 0
  • [] There’s no question mark for params – 1 bit = 0
  • total: 374
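
To make the bit-level layout concrete, here is a minimal C# sketch of the kind of bit writer such an encoder needs. The BitWriter class, the 3-bit header value and the flat 6-bit length field here are simplified placeholders, not my actual implementation (which, for example, encodes lengths as 3-bit-block numbers):

using System;
using System.Collections.Generic;

// Minimal sketch only: a bit writer and one 5-bit lowercase segment,
// illustrating the packing described above. Header values and the
// flat 6-bit length are simplified placeholders.
class BitWriter
{
    private readonly List<bool> _bits = new List<bool>();

    public int Count => _bits.Count;

    public void Write(bool bit) => _bits.Add(bit);

    public void Write(uint value, int bitCount)
    {
        for (int i = bitCount - 1; i >= 0; i--)       // most significant bit first
            _bits.Add(((value >> i) & 1) == 1);
    }

    public byte[] ToBytes()
    {
        var bytes = new byte[(_bits.Count + 7) / 8];
        for (int i = 0; i < _bits.Count; i++)
            if (_bits[i]) bytes[i / 8] |= (byte)(1 << (7 - (i % 8)));
        return bytes;
    }
}

class PackedUrlSketch
{
    const uint FiveBitStringHeader = 0b010;           // hypothetical 3-bit segment type

    static void WriteLowercaseSegment(BitWriter w, string text)
    {
        w.Write(FiveBitStringHeader, 3);              // segment type: 5-bit string
        w.Write((uint)text.Length, 6);                // length (flat 6 bits in this sketch)
        foreach (char c in text)
            w.Write((uint)(c - 'a'), 5);              // 'a'..'z' fit in 5 bits
    }

    static void Main()
    {
        var w = new BitWriter();
        w.Write(true);                                // scheme bit: 1 = https
        w.Write(false);                               // no username/password
        WriteLowercaseSegment(w, "stackoverflow");    // 3 + 6 + 13*5 = 74 bits
        w.Write(true);                                // a dot follows
        // ... dictionary lookup for "com", folder slashes, number blocks, etc.
        Console.WriteLine($"{w.Count} bits -> {Convert.ToBase64String(w.ToBytes())}");
    }
}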

It’s possible to add your own custom values to the segment dictionaries. This would be handy if you create a custom implementation for your application: you can have common API folder paths across multiple hosts, common subdomain names, common parameter keys, and support for specific TLDs.

Applications

Data Types

  • Expected http scheme – 1 bit
  • Expected dot – 1 bit
  • Expected folder – 1 bit
  • 4-bit string – most popular lowercase English language characters
  • 5-bit string – lower case English alphabet characters, and also – _ + which are common in URLs
  • 6-bit string – base64 using – _ and + for padding
  • GUID – 128-bit
  • Number – 8-bit block (with 1-bit extension)
  • Expected port number – 16-bit ushort
  • (Reserved) – Hex? Base-7 for special chars? Mixed (Dictionary Text)?

Future

It would be nice if it could be further reduced to around 25% of the original size, 80% of the time, but I don’t think that is possible with the current scheme, and it is likely theoretically impossible. 50% is likely close to the maximum compression limit.

  • Connect with other people and groups to help contribute. see https://whatwg.org/ (Thanks Paul Brooks)
  • IP addresses as hosts (thanks Paul Brooks!) – IPv4 and IPv6
  • Punycode Support (thanks Paul Brooks!) – for unicode support
  • Fill 3 remaining unassigned characters for the 5-bit text scheme
  • 7-bit base-128 to support special characters that are allowed in URLs, e.g. { , ! } and more
    • Or, perhaps remove 4-bit text strings replacing that with special characters only, AND then ADD a mixed-type segment type which can include Dictionary, String, Data
  • Distinct URL Text to URL Data Model step – so we can leverage a mature URL library to ensure we don’t mis-parse a URL. Then the URL Data Model can be binary (bitstream) serialised. see https://url.spec.whatwg.org/ and https://www.urlencoder.org/
  • Release the code opensource for others to learn and expand on.
  • PackingHashes (bookmarks) – technically not needed, because servers don’t get sent these values nor process them.
  • Decoder – this is only academic so far with a prototype encoder.
  • My system already includes a 4-bit (16 letters) string encoder for when there’s only lowercase characters and the less common alphabet characters are not used.
  • I intend to ignore uppercase characters when selecting the 5-bit encoding except for parameter values. But some web servers and underlying file systems are case-sensitive so this would be an optional configuration of the encoder.
  • Mixed Dictionary and Letters – allow a match on the dictionary as well as text encoding inside a URL segment. This mode would be run alongside the normal text-only mode and the shorter solution used.
  • Include an English dictionary for common partial word segments like ‘ies’. This might only be useful for longer word segments given the overhead of Mixed Dictionary and Letters.
  • Hardcoded and common dictionaries with a header number to indicate which dictionary set to use. For example they could be regional – Australia has .net.au, .com.au, etc.. for Hosts. Also, popular hostnames such as facebook, google, bing, instagram, etc..
  • ParamsOnly mode – where the prefix is inferred by the application context, but the params are the variable part of the URL string.
  • Javascript Version – currently it’s written only in C#.

Ocean Subsurface Tubes

The Hyperloop is a great idea brought back to life by Elon Musk, but it didn’t take long before he realised that running these tubes above ground wasn’t feasible, due to the multitude of government jurisdictions, and because existing highway corridors couldn’t be followed at high speed.

I anticipated this when it was first announced, and tried to contact the various companies trying to develop the engineering for the idea. But I never got a reply, so I’ll publish it here instead.

I watched What If We Built a Road Around the World? today. In it, the presenter writes off Australia as having stretches of bridge too far. But I knew the solution, the same one as for Hyperloop. This motivated me to write this today.

If tubes were submerged 50-100 metres below the ocean surface, ships would pass over without any trouble, and storm activity wouldn’t have an impact. They would be neutrally buoyant despite being made from heavy, strong materials, tethered to the sea bed where possible, and may have propellers to counter currents in between.

I’m sure there would be more challenges such as ship anchors, but that comes down to engineering detail.

The tube would be built in segments in factories on land and pushed directly to sea. This would make the tubes cheaper to build than above water bridges and seabed tunnelling, plus they would be resellable to be redeployed somewhere else.

So these tubes could be cheaper than bridges for longer lengths (let’s assume > 5km), making it possible for road travel around the world.

But that’s not all. Being in the ocean means deployment in international waters which would eliminate a lot of red tape. The tubes could also support communications, electricity, and oil pipelines.

They could be air-evacuated for hyperloop type travel. And there are many interesting places they could be deployed. Such as on the west coast of the US for North-South travel, also to bypass the Darien Gap, to connect New Zealand to Australia, and Australia up to South-East Asia. With all the islands in the Pacific, there may be an economical route across the Pacific, but a full Atlantic crossing should also be feasible with the higher European-American demand.

I don’t have the money or time to build this, but I hope to travel from Sydney to San Francisco in a few hours one day.

 

Don’t use Resource Strings with C#

I recommend separate static class files with static readonly strings.

Problems:

  • The XML Resource files are hard to source-control
  • The UI for resource strings is hard to scroll through and edit in-place

It’s better when strings are in code files, with multi-line strings using @”” or $@””. Maybe append “Resources” to the end of the class name, as a convention.
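
For example, a minimal sketch of the convention (the class, members and strings here are just placeholders):

// A sketch of the convention: a plain static class instead of a .resx file.
public static class InvoiceEmailResources
{
    public static readonly string Subject = "Your invoice is ready";

    public static readonly string Body = @"Hi,

Your invoice is attached.
Thanks for your business.";

    // Static functions can take and apply parameters.
    public static string Greeting(string customerName) => $"Dear {customerName},";
}

// Usage: press F12 on Subject and you land directly on the string.
// var subject = InvoiceEmailResources.Subject;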

Benefits:

  • You can use any coding techniques with them
  • You can be more cohesive, create multiple separate classes
  • You can have static functions to take and apply parameters
  • You use the normal text editor
  • You can press F12 on a reference, and get directly to editing the string
  • No XML to deal with – less merge conflicts

 

Shorter Work Weeks – A forgotten lever for the Automation Age

It was close to the time of the industrial revolution when a trade union in England lobbied for 888: 8 hours for work, 8 hours for recreation and 8 hours for sleep. It’s important to note here that the industrial revolution made quite a few artisans unemployed, but the wealth from the automation made this policy change possible.

Hey Elon Musk, Artificial Intelligence will not be bad like you say.

It’s everywhere in the media at the moment [2017-02-19]: Elon Musk crystal-balling doom and gloom about automated intelligence, when he’s in a country made rich by automation.

The thing is, the less we do robotic mundane things, the richer economies get, and the more human we become. Please, watch this: https://www.youtube.com/watch?v=AsACeAkvFLY&t=616s, it’s very well articulated.

Now, I have covered all this before in my previous blog article [Robots The Working Class], and there will be pockets of mass unemployment, where Government policy has propped up flailing businesses, but overall, this transition will be quite smooth and again hugely beneficial.

But I have continued thinking about this, and made an important realisation: We need to continue to reduce our work week hours, to keep most people in some sort of traditional employment.

Back in the industrial revolution, this realisation took a while, and required a workers revolt (more about the hours, than sharing the jobs). The sooner Government masters this dusty old lever of working hours, the better.

Rather than campaigning for unemployment benefits, where there’s the damaging problem of those bludging off others, I believe Government should continue to reduce the maximum hours in the work week, and keep more people employed.

This would start in the primary and secondary industries, which are being disrupted the most by automation. It would begin as a reduction of another half an hour every 5 years, increasing the pace as needed.

A lot more research needs to be done here, but this will be required as we leave the information age and enter the automated age.

(This was written without much review, this article will need more consideration, editing, and I’m hoping to research this whole domain of the work week more and more. BTW, workers might get paid more as their work week reduces, so their pay is the same.)

Geelong needs a Makerspace

No one knows what you’ll discover or create. You’re unique. You have seen things that others haven’t, and face a set of problems unique to you and your friends.

So when you play with different tools and get exposed to bigger possibilities, there’s no telling what will happen.

Here’s one example of the unexpected you see in a makerspace. I bought this thermal imaging camera, a FlirOne, for an ecology project in Cape Otway, to track predators.

Have you ever looked at a cup of tea like this before?

First you turn on the kettle. You can see the water level where it’s the hottest.

Bring it to the boil. You can’t see that line anymore, the steam has made the pot uniformly hot.

Pour (don’t forget to prepare the tea bag beforehand). Looks like lava.

Get some nice cold milk. You can see the reflection on the kettle (technically it’s not reflecting coldness – I know).

All marble-y – and that’s after mixing the milk. Probably just standard heat convection there.

Here are some videos:

I know of at least one makerspace in the making, for Rock O’Cashel Lane in the CBD. Make sure you get behind Kathy Reid and Jennifer Cromarty, and make this place legendary.

Technomotive – a new word for a digital age

Tech – no – mo – tive (adjective)

  1. A response in a person to hype over excess quantities and potential, subjective quality, statistics and parameters, and perceived possibilities
  2. Discarding or overriding other important factors in a debate or decision, due to [1]
  3. Examples:
    • Being an audiophile she bought the $5000 cable, blinded by her technomotive weakness.
    • Like any car salesman, they used technomotive language, reading out the 0-100kmph acceleration time, and power output of the engine.
    • The politician knew the 100mbps figure would be technomotive to journalists and tax-payers alike.
    • Technomotive descriptions held the audience’s attention
    • The entire domain of technomotive persuasion is largely unexplored
  4. Related forms:
    Technomotively (adverb)
    Technomotiveness, Technomotivity (noun)

The need for a new word

In ancient Greek times, Aristotle wrote of 4 modes of persuasion: Pathos, Ethos, Logos, and Kairos. They relate to emotional appeal, the personal character of the speaker, the reasoning, and the limited time. Back then there were no computers or technology as we perceive it, and no conceivable excess quantities. As a result, Aristotle never identified a further mode of persuasion: Techno.

The key difference in these times is the pace of change, which introduces a tangible factor of obsolescence, and the emergence of an environment of expectations in culture. The classical persuasion techniques concern important factors that are wise to consider, but left unbalanced they are tools for persuasion – tools to distract from important factors not being brought to the attention of the audience. A person who considers technomotive factors is not necessarily technomotively persuaded, if they balance other considerations well. Although obsolescence is objectively real, it rarely justifies getting the best and paying a premium.

I propose that technomotive persuasion has only been a common technique since the Industrialisation Age, for over 100 years, but it has never had a name. Technomotive is a word which helps analyze persuasive writing, and arguments wherever excess quantities or qualities are expressed or experienced, but typically in a technology context. This new word provides a handle for analysis, and is the beginning of deeper research of the domain.

A work in progress

I identified the need for this word about 6 years ago, with several attempts to articulate and refine. I hope others will find it useful and contribute more ideas and research in this domain. I’ll continue to write more below, but hopefully the writing above is a sufficient and stable definition to move forward with.

Further examples

Lots of examples can be found in marketing material and article headlines, here are some examples of Technomotive language (in quotes):

  •  “Experience mind-blowing, heart pumping, knee shaking PhysX and …” – Technomotive language, appeals to the desire to have the best possible gaming experience.
  • “Today is so yesterday” – Technomotive language, appealing to desire to have the latest technology
  • “Planning to overclock your shiny new Core i5 system? Kingston reckons it has the RAM you need.” – Technomotive language, appeals to the desire to have the latest and most powerful technology.
  • The skyscraper was made up of 4,000T of steel and concrete
  • The new dam holds 4000GL of water, enough to…
  • Perhaps overordering of stock could be considered “excessive quantity”, when it is later found to be useless because there are no buyers (they probably never did their market research, and were persuaded by the manufacturers that it would sell).

More on Ignoring Economics

A good example is building one’s own PC. People with the money will often splurge on the best of everything, followed by benchmarking, to feed their technomotive desire for performance. When economics is considered, this isn’t the best choice: last year’s technology performs 20% less, but will cost 50-80% less. Economics is less of a consideration when someone is driven by technomotive desire.
In decision-making, in the case of building a PC, it might be for gaming. One might justify the additional cost for the better quality of gameplay (another technomotivation). Rather than judging in a critical tone that the economics are unfavourable, one should instead reflect that technomotive desires have the biggest influence.

Ideas

Which may be used to update the core definition or the section [need for a new word]

  • “Excessive Quantities” – this refers economically to depreciating assets, or latent value never eventually needed
  • “Accentuate quantities”
  • Informal word: “drooling”
  • “While in the ancient times, some might have desired more land relative to others, or more jewellery, I would suggest that was more tactical and economical compared to modern times where one desires the inclusion of an optional turbo charger in a car.”
    • It is only in modern times that more people are wealthy. In ancient times, fewer were wealthy.
    • The wealthy would have certainly sought lavish luxuries, most of which projected the wealth of the owner by social convention.
    • I wouldn’t consider any larger volumes of goods in store to be “excessive quantities”. They can be sold and traded for a similar value.
  • Decouple from obsolescence – while not in the definition, it is in the detail. Obsolescence is related, but I suspect should be kept separate to clarify the definition of technomotive. The more existing terms are explored and used, the better Technomotive can be refined.
    • Technomotive – quantities.
    • Obsolescence –
    • Nostalgia – One doesn’t think of an old computer in terms of Technomotive, we consider it obsolete, but it can have appeal by way of nostalgia.

Keywords

  • Persuasion
  • Horsepower
  • Kilowatt
  • Speed
  • Power

Personal Drones – Flying “Cars” for All

Great ideas and progress rarely come from well-worn paths. How long have we waited for Flying Cars? Many have tried turning cars into sort of planes, or jet hovering machines.

Now it’s possible. Not by making cars fly, but making drones bigger to carry people.

Drones are main-stream and mature. The industry grappled with air-space and privacy rules and created auto-pilot, stability systems, and backup redundancy. Engineers have been reinvigorated to develop new algorithms and mechanical structures.

All of this is great for personal transport through the skies. With Flying Cars, we were expected to have a recreational pilot licence, and although those engineers would have dreamed of auto-pilot, it was unobtainable. Drones have been a key stepping stone, and the newfound success of electric vehicles also paves a new path.

I suspect there are 10-20 years to go. The most critical element remaining is battery capacity. There are workarounds and hybrids, but when batteries get a science boost you’ll see a race to market from many key companies.

So stop hoping for Flying Cars, and start saving for your Personal Drone Transport. (And hopefully they find a good name for it)

Updates:

Why did Open Source Bounties Fail?

I’m shocked. I thought Bounties would supercharge Open Source development. You were the chosen one! (cringe)

So today, I wanted to post a bounty for Stasher. I did so on BountySource, but then I realised it was broken and abandoned. I looked further afield and it’s the same story, a digital landscape littered with failures.

Bounty Source

It’s one of the better ones, limping along. They need a serious financial backer to grow their community faster.

  1. They seem to have a lot of server issues. Have a look at their recent twitter feed [https://twitter.com/Bountysource]
  2. When I posted my bounty, I did expect a tweet to go out from their account (as per my $20 add-on). Nothing. Either that subsystem is broken, or it has never been automated.
  3. Bounty Search is broken – “Internal server error.” in the console log.
  4. We know what I think about good security architecture. If people can’t talk about security correctly, it doesn’t matter if they know about bcrypt – can they properly wield its power?
  5. No updates on Press since 2014

Freedom Sponsors

They don’t have enough of a profile to excite me about their future. This has apparently been executed on a shoe-string budget. (I’ll try posting a bounty here if the Bounty Source one lapses)

  1. Only 12 bounties posted this year (Jan-Nov) – only 4 of those have workers, 2 of those look inactive. But at least search works.
  2. Their last Tweet was 2012

Others

http://bountyoss.com/, http://cofundos.com/ are down.

Analysis

This shouldn’t have happened. It failed because these startups ran out of cash and motivation.

There is massive potential here. So far we’ve seen MySpace; we need Facebook-level execution. And whoever does this needs a good financial backer with connections to help grow the community.

I hope to see an open source foundation, maybe Linux Foundation, buy Bounty Source.

Stasher – File Sharing with Customer Service

(This is quite a technical software article, written with software coders in mind)

It’s time for a new file sharing protocol. P2P in general is no longer relevant as a concept, and central filesharing sites show that consumers are happy with centralised systems with a web interface. I think I have a good idea for the next incremental step, but first some history.

It’s interesting that P2P has died down so much. There was Napster and other successes which followed, but BitTorrent seems to have ruled them all. File discovery was lost, and with Universal Plug and Play being a big security concern, even re-uploading is not on by default.

P2P is no longer needed. It was so valuable before because it distributed the upload bandwidth, and also anonymised somewhat. But bandwidth continues to fall in price. MegaUpload and others like it were actually the next generation, adding some customer service around the management of the files and charging for premium service. Dropbox and others have sort of carved out even more again.

Stash (which is hopefully not trademarked), is my concept to bring back discovery. It’s a different world, where many use VPNs and even Tor, so we don’t need to worry about security and anonymity.

It’s so simple, it’s easy to trust. With only a few hundred lines of code in a single file, one can compile their own, on Windows in seconds. So there can be no hidden backdoors. Those users who can’t be bothered with that, can download the application from a trusted source.

It works by being ridiculously simple. A dumb application which runs on your computer is set up to point to one or more servers. It only operates on one folder, the one it resides in. From there, the servers control Stasher. A client can do any of the following, and can ban a server from doing a particular action.

And that’s it. It’s so basic, you should never have to update the client. New features should be resisted. Thumbnails should be generated on the server – because there is time and bandwidth to simply get the whole file.

All with varying software on the server, but the same Stash client. There is no direct P2P, however several servers can coordinate, such that a controller server can ask a client to upload to another specific server. Such a service can pre-package the Stash client with specific servers. Then throughout the lifetime, the client server list can be updated with new servers.
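
To make the idea of a dumb client concrete, here is a rough C# sketch of the kind of poll loop I have in mind. The server address, the /next-command endpoint and the command handling are entirely hypothetical placeholders, not the actual Stasher protocol:

using System;
using System.Net.Http;
using System.Threading.Tasks;

// Rough sketch only: a dumb client that asks its configured servers what to
// do with the folder it resides in. The endpoint, commands and polling are
// hypothetical placeholders, not the actual Stasher protocol.
class StasherClientSketch
{
    static readonly string[] Servers = { "https://stash.example.com" };
    static readonly string Folder = AppContext.BaseDirectory;
    static readonly HttpClient Http = new HttpClient();

    static async Task Main()
    {
        Console.WriteLine($"Stasher sketch managing folder: {Folder}");
        while (true)
        {
            foreach (var server in Servers)
            {
                // Ask the server what, if anything, this client should do next.
                var command = await Http.GetStringAsync(server + "/next-command");

                // A real client would parse the command, check it against the
                // user's ban list, then act on files inside Folder only.
                Console.WriteLine($"{server} says: {command}");
            }
            await Task.Delay(TimeSpan.FromMinutes(1)); // poll politely
        }
    }
}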

I’m thinking of building this, but I’m in no rush. I’ll make it open source. Can you think of any other applications for such a general-purpose file sharing framework?

For more information, see https://bitbucket.org/merarischroeder/stasher/wiki/Home

Appendix

Security measures ideas:

  • [Future] Code Virtual Machine
    • Only System and VM namespaces used
    • VM namespace is a separate small DLL which interacts with the system { Files, Network, System Info }
    • It’s easier to verify that the VM component is safe in manual review.
    • It’s easy to automatically ensure the application is safe
    • Only relevant for feature-extended client, which will span multiple files and more
  • [Future] Security analyser works by decompiling the software – ideally a separate project

Remaining problems/opportunities:

  • Credit – who created that original photo showing on my desktop? They should get some sort of community credit, growing with the votes they get. We need some sort of separate/isolated server which takes a hash and signs/stores it with a datetime, and potentially also with extra meta-data such as author name/alias
    • Reviewers, while not as important should also be able to have their work registered somewhere. If they review 1000 desktop backgrounds, that’s time. Flickr for example could make a backup of such credit. Their version of the ledger could be signed and dated by a similar process.
  • Executable files and malware – 
    • AntiVirus software on the client
    • Trusting that the server makes such checks – eg. looking inside non-executables even for payloads. ie. image file tails.
  • Hacked controller
    • File filters on the client to only allow certain file types (to exclude executable files) – { File extensions, Header Bytes }
    • HoneyPot Clients – which monitor activity, to detect changes in behavior of particular controllers
    • Human operator of controller types in a password periodically to assure that it’s still under their control. Message = UTCTimestamp + PrivateKeyEncrypt(UTCTimestamp), which is stored in memory.
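
As an illustration of that last point, signing the current UTC timestamp with the operator’s private key (and letting clients verify it against the published public key) could look roughly like this; the “PrivateKeyEncrypt” step is shown here as a standard RSA signature, and the key handling is deliberately simplified:

using System;
using System.Security.Cryptography;
using System.Text;

// Simplified illustration of the controller heartbeat idea above: the operator
// signs the current UTC timestamp, and clients verify it with the public key.
class ControllerHeartbeatSketch
{
    static void Main()
    {
        using (var rsa = RSA.Create())
        {
            // Controller side: Message = UTCTimestamp + Sign(UTCTimestamp)
            string timestamp = DateTimeOffset.UtcNow.ToString("o");
            byte[] data = Encoding.UTF8.GetBytes(timestamp);
            byte[] signature = rsa.SignData(data, HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);

            // Client side: verify the signature and reject stale timestamps (replay protection).
            bool signed = rsa.VerifyData(data, signature, HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);
            bool fresh = DateTimeOffset.UtcNow - DateTimeOffset.Parse(timestamp) < TimeSpan.FromHours(24);

            Console.WriteLine(signed && fresh
                ? "Controller still appears to be under operator control"
                : "Reject this controller");
        }
    }
}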

Food Forever?

What if we could save our spoiling food before it was too far gone? I often have half a litre of milk which spoils at the office and I have to tip it down the sink.

I’m no biochemist, so I’m hoping this idea finds a nice home with a real scientist who either debunks it or points the way forward.

Could we have a home appliance which could UHT leftover milk, so that we can use it later or donate it?

Are there other foods which could be preserved in such a way? I’m guessing most would be an ultra heat process. Like an autoclave, you need to kill all the bacteria with no regard for taste. If it’s meat, it might be tough, but it would at least be a better pet food than what’s in a can.

Problem?

5 Secret Strategies for GovHack

Monday night I attended the VIC GovHack Connections Event. No, there wasn’t any pizza… but there was a selection of cheeses, artichokes and more.

Here are my Top 5 tips

1) Do something very different

This competition has been running for a number of years and the judges are seeing some similar patterns emerging. Browse through previous years’ hackerspace pages and look at the types of projects they’ve had before. Look at the winners.

2) Evaluate the data

This might be the main aim of your project, but we want quality data for future years, and enough evidence to remove the unnecessary, find the missing, and refresh the old.

3) Prove real-time and live data

Melbourne City have their own feeds of real-time data this year. If you want to see more of that, consider using this data.

4) Simulate data

This strengthens your assessment of missing data [2], could involve simulated live data feeds [3] above, and would be very different [1].

5) Gather data

This is actually a bit harder than simulating data [4], but very useful. You could use computer vision, web scraping, or make an open app (like OpenSignal) that many people install to collect data.

Off the record

I’ve got a few ideas for GovHack projects in mind on the day. I’m not competing, so come and talk to me on Friday night or Saturday for ideas along these lines.

Try Scope Catch Callback [TSCC] for ES6

So it has started; it wasn’t a hollow thought bubble. I have started the adventure beyond the C# nest [http://blog.alivate.com.au/leave-c-sharp/]. It will take a while, because I still have a lot of software that runs on C#, and I do still like the language, but all new development will be on ES6 and NodeJS.

So I’m going to record my outlook over a few blog posts. I re-discovered Cloud9 IDE, and I’ve got a few thoughts on architecture and a new feature for ES6.

Today, I’ll tell the world about my proposed ES6 enhancement.

Despite the ECMAScript committee stating they are about “Standards at Internet Speed”, there isn’t much Internet tooling in there to make it happen. They have certainly been successful in making rapid progress, but where does one submit an idea to the committee? There’s not even an email link. I’m certainly not going to cough up around $100k AUD to become a full member. [Update: They use GitHub; a link to this on their main website would be great. Also check out: https://twitter.com/ECMAScript]

So I’ll be satisfied to just put my first ES6 idea here.

Try blocks don’t work in a callback world. I’m sure there are libraries which could make this nicer. In C#, try blocks do work with the async language features, for instance.

So here is some code which won’t catch an error:

try
{
    $http.get(url).then((r) => {
        handleResponse(r);
    });
}
catch (e)
{
    console.log(e);
}

In this example, if there is an error during the HTTP request, it will go uncaught.

That was simple, though. How about a more complex situation?

function commonError(e) {
    console.log(e);
}

try
{
    runSQL(qry1, (result) => {
        doSomethingWith(result);
        runSQL(qry2, (result) => {
            doSomethingWith(result);
        }, commonError)
    },commonError);
}
catch (e)
{
    commonError(e);
}

Callback nesting isn’t very nice. This is why `await` is pushed forward as a good candidate. But what if the API you target doesn’t implement Promise? What if you only sometimes define a try block?

My proposal is to supply a method which gets the Try Scope Catch Callback [TSCC]. If you don’t return a promise, it would be like this:

function get(url, then, error) {
  error = error || window.callback.getTryScopeCatchCallback(); //TSCC fallback

  //when an error occurs:
  error(e);

  //This could be reaching another
  //try/catch block, or be the result
  //of a callback from another error method
}

Promises already have a catch function in ES6. They’re so close! A Promise should direct its error/catch callback to the TSCC by default. If the Promise spec was updated to include this, my first example of code above would have caught the error with no changes in code.

So what do you think ECMA members, can we get this into ECMAScript?

Feedback log – from the es-discuss@mozilla.org mailing list

  • kdex

Why not just transform callback-based APIs into `Promise`s and use (presumably ES2017)
`await`/`async` (which *does* support `try`/`catch`)?

e.g.:

try {
    await curl("example.com");
    /* success */
}
catch (e) {
    /* error */
}

  • My response

1. Whether you await or not, the try scope’s catch callback [TSCC] should still be captured.

2. If there is no use of Promise (for coders own design reasons) the try scope’s catch callback [TSCC] should be available

GovHack – Do we need real-time feeds?

It’s the year 2016, and we still don’t know how many minutes away the next bus is in Geelong.

Public releases of data take time and effort, and unless they are routinely refreshed, they get stale. But there’s certain types of information that can’t be more than minutes old to be useful.

Traffic information is the most time sensitive. The current state of traffic lights, whether there are any signals currently out of order, and congestion information is already collected real-time in Australia. We could clearly benefit from such information being released as it happens.

But imagine if this benchmark of up-to-the-minute data was applied to all datasets. First of all, you won’t have any aging data. More importantly, it would force data publication to be automated, and therefore scalable, so that instead of preparing another release of data, public servants could be focusing on the next type of data to make available.

What do you think?

Participate in GovHack this year, play with the data we do have and continue the conversation with us.

(I will be publishing a series of blogs focusing on GovHack, exploring opportunities and challenges that arise and that I consider while I work on the committee for the Geelong GovHack, which runs 29-31 July 2016)

Image courtesy Alberto Otero García licensed under Creative Commons

GovHack – What tools will you use this year?

The world is always changing, and in the world of technology it seems to change faster.

You certainly want to win some of the fantastic prizes on offer, but remember, we want world changing ideas to drive real change for real people, and we can do that best together.

So share with us and your fierce competitors, which new tools and techniques you plan to use this year.

Some popular new ones that I’m aware of include Kafka and MapMe.

Both of these feed into my own personal desire to capture more data and help Governments release data real-time. Check them out, and please comment below about any tools and techniques you plan to use this year.

(I will be publishing a series of blogs focusing on GovHack, exploring opportunities and challenges that arise and that I consider while I work on the committee for the Geelong GovHack, which runs 29-31 July 2016)

Image courtesy RightBrainPhotography licensed under Creative Commons

What data do you want to see at GovHack?

Let’s forget about any privacy and national security barriers for the moment. If you could have any data from Government, what would you request?

GovHack is a great initiative which puts the spotlight on Government data. All of the departments and systems collect heaps of data every day, and lucky for us they’re starting to release some of it publicly.

You can already get topological maps, drainage points, bin locations, bbq locations, council budget data and much more. But that’s certainly not all the data they have.

Comment below on what data you think would be useful. It might already be released, but it would be interesting to go to Government with a nice long shopping list of data to be made ready for us to delve into next year.

(I will be publishing a series of blogs focusing on GovHack, exploring opportunities and challenges that arise and that I consider while I work on the committee for the Geelong GovHack, which runs 29-31 July 2016)

Image courtesy Catherine, licensed under Creative Commons

GovHack – How can we collect more data?

If we had all the cancer information from around the world, any keyboard warrior could wrangle the data and find helpful new discoveries. But we struggle to even complete a state-level database let alone a national or global one.

After being dazzled by the enormous amount of data already released by Government, you soon realise how much more you really need.

For starters, there are lots of paper records that aren’t even digital. This isn’t just a Government problem of course; many private organisations also grapple with managing unstructured written information on paper. But if Governments are still printing and storing paper in hard copy form, we further delay a fully open digital utopia. At the very least, storing atomic data separately from a merged and printed version enables future access, and stops the mindless discarding into the digital black hole.

Then consider all the new types of data which could be collected: the routes that garbage trucks and buses take, and the economics of their operation. If we had such data streams, we could tell citizens if a bus is running ahead or behind. We could have GovHack participants calculate more efficient routes. Could buses collect rubbish? We need data to know. More data means more opportunities for solutions and improvement for all.

When you consider the colossal task ahead of Government, we must insist on changing the culture so that data releases are considered a routine part of public service, and also make further data collection an objective, not a bonus extra. Until that happens, large banks of knowledge will remain locked up in fortresses of paper.

What do you think? Do you know of any forgotten archives of paper that would be useful for improving lives?

Participate in GovHack this year, play with the data we do have and continue the conversation with us.

(I will be publishing a series of blogs focusing on GovHack, exploring opportunities and challenges that arise and that I consider while I work on the committee for the Geelong GovHack, which runs 29-31 July 2016)

Image courtesy Fryderyk Supinski licensed under Creative Commons

Why I want to leave C#

Startup performance is atrocious and, critically, that slows down development. It’s slow to get the first page of a web application, slow navigating to whole new sections, and worst of all: initial Entity Framework LINQ queries.

It’s 2016, .Net is very mature, but this problem persists. I love the C# language much more than Java, but when it comes to the crunch, the run-time performance is critical. Yes, I was speaking of startup performance, but you encounter that again when new areas of the software warm up and also when the AppPool is recycled (scheduled every 13 hours by default). Customers see that most, but it’s developers who must test and retest.

It wastes customers’ and developers’ time. Time means money, but the hidden loss is focus. You finally get focused on a task, but then have to wait 30 seconds for an ASP.NET web page to load so you can test something different. Even stopping your debugging in VS can take tens of seconds!

There are known ways to minimise such warmup problems, with native generation and EF query caching. Neither is a complete solution. And why work around a problem that isn’t experienced in node.js or even PHP?

.Net and C# are primarily for business applications. So how important is it to optimise a loop over millions of records (for big data and science) compared to the user and developer experience of running and starting with no delay?

Although I have been critical of Javascript as a language, recent optimisations are admirable. It has been optimised with priority for first-use speed, and critical sections are optimised as needed.

So unless Microsoft fix this problem once and for all, without requiring developers to coerce workarounds, they’re going to find long-term dedicated coders such as myself shifting to Javascript, especially now that ECMAScript and TypeScript make Javascript infinitely more palatable.

I have already recently jettisoned EF in favour of a proprietary solution which I plan to open source. I also have plans for node.js and even my own IDE which I plan to lease. I’m even thinking of leaving the Managed world altogether – Heresy!

.Net has lots going for it, it’s mature and stable, but that’s not enough anymore. Can it be saved? I’m not sure.

My Patch for the Internet Security Hole

I just posted another article about the problem, but there are several steps which could be taken today to plug the hole, although that won’t protect any historical communications. This involves doubling security with post-quantum cryptography (PQC) and also the use of a novel scheme that I propose here.

PQC can’t be used alone today; it’s not proven. Many of the algorithms used to secure internet communication today were thoroughly researched, peer reviewed, and tested. They stand the test of time. But PQC is relatively new, and although accelerated efforts could have been taken years ago so it would mature sooner, they were not. That doesn’t mean PQC doesn’t have a part to play today.

Encryption can be overlapped, yielding the benefits of both. RSA, for example, is very mature but breakable by a Quantum Computer. Any PQC is immature, but theoretically unbreakable by a Quantum Computer. By combining these, the benefits of both are gained, with additional CPU overhead. This should be implemented today.

Standards need to be fast-tracked, and software vendors need to implement them with haste. Only encapsulation is required, like a tunnel in a tunnel. TLS may already have the ability for dual-algorithm protection built into the protocol; I’m yet to find out.

In addition to the doubling described above, I have a novel approach: Whisp. Web applications (ignoring oAuth) store a hash of a password for each user; this hash can help to form a key to be used during symmetric encryption. Because symmetric encryption is also mature and unbreakable (even by a Quantum Computer), it’s an even better candidate for doubling. But it would require some changes to the web application login process, and has some unique disadvantages.

Traditionally, in a web application, a TLS session is started which secures the transmission of a login username and password. Under Whisp, the fully secured TLS session would only be able to start after the user enters the password. The usual DH or RSA process is used to generate a symmetric key for the session, but that key is then processed further using the hash of the user’s password (likely with a hashing algorithm). Only if the user enters the correct password will the secure tunnel be active and communication continue. There are still drawbacks to this approach, however.
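
A rough sketch of the key step under Whisp, assuming the server already stores a hash of the user’s password: both sides mix the negotiated session key with that password hash to derive the final symmetric key. This is only an illustration of the idea, not a vetted protocol; in practice a proper KDF such as HKDF would be used rather than a bare hash:

using System;
using System.Security.Cryptography;

// Illustration only of the Whisp idea above: mix the DH/RSA-negotiated session
// key with the stored password hash to derive the final symmetric key.
// Not a vetted protocol design.
static class WhispSketch
{
    public static byte[] DeriveSessionKey(byte[] negotiatedKey, byte[] passwordHash)
    {
        var combined = new byte[negotiatedKey.Length + passwordHash.Length];
        Buffer.BlockCopy(negotiatedKey, 0, combined, 0, negotiatedKey.Length);
        Buffer.BlockCopy(passwordHash, 0, combined, negotiatedKey.Length, passwordHash.Length);

        using (var sha = SHA256.Create())
        {
            // Both sides compute this; only a client that knows the password
            // (and a server that stores its hash) arrives at the same key.
            return sha.ComputeHash(combined);
        }
    }
}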

The initial password still needs to be communicated to the server upon registration. So this would work well for all established user accounts, but creation of new user accounts would require additional protections (perhaps PQC doubling) when communicating a new password.

I would favor the former suggestion of PQC doubling, but there may well be good reasons to also use Whisp. And it shouldn’t be long before PQC can be relied upon on its own.

Busted! Internet Community Caught Unprepared

Internet Security (TLS) is no longer safe. That green HTTPS word, the golden padlock: all lies. The beneficiaries: trusted third parties who charge for certificates. Yes, it sounds like a scam, but not one actively peddled; this one comes from complacency among the people who oversee the standards of the internet. Is there bribery involved? Who knows.

A month ago, there were no problems with TLS, because it was only on the 6th of October that a paper was published which paves the way to build machines which can break TLS. Update: now a whole Q-computer architecture has been designed publicly (what has been done privately?), and one can be built for under $1B. These machines are called Quantum Computers. So where’s the scam?

The nerds behind the Internet, knew long ago about the threat of developing such a machine. They also knew that new standards and processes could be built unbreakable even by a Quantum Computer. But what did they do? They sat on their hands.

I predicted in 2010 that it would take 5 years before a Quantum Computer would be feasible. I wasn’t specific about a mass production date. I was only 4 months out. Now it’s feasible for all your internet traffic to be spied on, including passwords, if the spy has enough money and expertise. But that’s not the worst part.

Your internet communication last year may be deciphered also. In fact, all of your internet traffic of the past, that you thought was safe, could be revealed, if an adversary was able to store it.

I wrote to Verisign in 2010 and asked them what they were doing about the looming Internet Emergency, and they brushed my concern aside. True, users have been secure to date, but they knew it was only a Security Rush. Like living in the moment and getting drunk, not concerned about tomorrow’s hangover, users have been given snake oil, a solution that evaporates only years later.

All of these years, money could have been poured into accelerated research. There are solutions today, but they’re not tested well enough. The least that could be done is a doubling of security: have both the tried and tested RSA, and a new, theoretically unbreakable encryption, in tandem.

Why is there still no reaction to the current security crisis? There are solid solutions that could be enacted today.

Updates

  • 2018-08-05: “If you have a secret today, don’t encrypt it with RSA if you believe quantum computing is coming.” —Matthias Troyer, Microsoft. see https://spectrum.ieee.org/view-from-the-valley/computing/hardware/quantum-computing-researchers-on-the-pace-of-development-managing-a-quantum-group-and-the-end-of-bitcoin
  • 2018-01-02: What if Shor’s algorithm isn’t optimal? What if the factors can be found using fewer Qubits? What if there is a completely different algorithm? Although Governments may already have a Quantum Computer with many more Qubits than expected, lowering the requirement is another way to advance quickly.
  • 2017-12-12: “applications in fields such as drug design and catalyst development are likely to materialize sooner, as they’re able to make use of smaller quantum computers with hundreds of qubits, compared to the thousands required to break cryptography” https://arstechnica.com/gadgets/2017/12/microsofts-q-quantum-programming-language-out-now-in-preview/
  • 2017-11-16: “We’re going to look back in history and say that [this five-year period] is when quantum computing emerged as a technology” “Gil believes quantum computing turned a corner during the past two years. Before that, we were in what he calls the era of quantum science” “But 2016 to 2021, he says, will be the era of “quantum readiness,” a period when the focus shifts to technology that will enable quantum computing to actually provide a real advantage”
  • 2017-06-29: Qubits hold a superposition of two states. Qudits hold more than two, requiring fewer quantum entangled particles. Fewer particles means less chance of decoherence, and therefore an earlier date of seeing a Quantum Computer silently cracking the internet’s encrypted secrets. If not already. see http://spectrum.ieee.org/tech-talk/computing/hardware/qudits-the-real-future-of-quantum-computing
  • 2017-05-26: “In a recent commentary in Nature, Martinis and colleagues estimated that a 100-million-qubit system would be needed to factor a 2,000-bit number—a not-uncommon public key length—in one day.” see http://spectrum.ieee.org/computing/hardware/google-plans-to-demonstrate-the-supremacy-of-quantum-computing
  • 2017-02-21: Here’s a great video which explains Quantum Computing and the maths behind it. They don’t quite realise the security threat today, but that’s ok, it’s a great video – https://www.youtube.com/watch?v=IrbJYsep45E
  • 2017-02-03: A feasible Q-computer architecture has been designed, with thorough public critique. see http://theconversation.com/how-we-created-the-first-ever-blueprint-for-a-real-quantum-computer-72290
  • 2016-07-09: Apparently Google heard me – http://arstechnica.com/security/2016/07/https-crypto-is-on-the-brink-of-collapse-google-has-a-plan-to-fix-it/. They’re focusing on the PQC named “Ring Learning With Errors”.
  • 2016-03-29: Another breakthrough, reducing the number of logical blocks for a swap. It’s clear that there’s a lot of interest and investment in Quantum Computing. Will this create an exponential cycle of discovery and additional funding/interest? Will drug companies start to invest directly and more strongly? see http://www.cio.com.au/article/596836/quantum-computing-now-big-step-closer-thanks-new-breakthrough/

The Fraying of Communication and a proposed solution: Bind

In medicine, the misinterpretation of a doctor’s notes could be deadly. I propose that the ambiguity of even broader discourse has a serious and undiscovered impact. This problem needs to be researched and will be expounded on further, but I would like to explore a solution, which I hope will further open your understanding of the problem.

As with all effective communication, I’m going to name this problem: Fraying. For a mnemonic, consider the ends of a frayed string being one of the many misinterpretations.

His lie was exposed, covered in mud, he had to get away from his unresponsive betraying friend: the quick brown fox jumped over the lazy dog.

That’s my quick attempt at an example where context can be lost. What did the writer mean? What can a reader or machine algorithm misinterpret it to mean? Even with the preceding context, the final sentence can actually still be interpreted many ways. It’s frayed in a moderate way with minor impact.

In this example, it would be possible for the author to simply expound further on that final sentence, but that could ruin the rhythm for the reader (of that story). Another method is to add such text in parentheses. Either way, it’s a lot of additional effort by multiple parties. And particularly in business, we strive to distill our messages to be short, sharp and to the point.

My answer of course is a software solution, but one where plain text is still handled and human readable. It’s a simple extensible scheme, and again I name it: Bind (going with a string theme).

The quick [fast speed] brown fox [animal] jumped [causing lift] over [above] the lazy dog [animal]

With this form, any software can present the data. One with understanding of the scheme, can remove the square brackets if there is no facility for an optimized viewing experience. For example:

The quick brown fox jumped over the lazy dog

(Try putting your mouse over the lighter coloured words)

Since the invention of the computer and keyboard, such feats have been possible, but not simple, and certainly not mainstream.

So it would be important to proliferate a Binding text editor which is capable of capturing the intent of the writer.

The benefits of Binding go beyond solving Fray. They add more context for disability accessibility (I would argue Bind is classed as an accessibility feature – for normative people), and depending on how many words are Bound, even assist with language translation.

Imagine Google Translate with a Binding text editor, the translations would be much more accurate. Imagine Google search, where you type “Leave” and hover over the word and select [Paid or unpaid time off work], leaving you less encumbered with irrelevant results.

Such input for search and translation need not wait for people to manually bind historical writing. Natural Language Processing can bear most of the burden and when reviewing results, a human can review the meaning the computer imputed, and edit as needed.

We just need to be able to properly capture our thoughts, and I’m sure we’ll get the hang of it.

Hey, by the way, please add your own narrative ideas for “the quick brown fox jumped over the lazy dog”, what other stories can that sentence tell?

Appendix – Further Draft Specification of Bind:

Trailer MetaData Option:

  • Benefit: the metadata is decoupled visually from the plain text. This makes viewing on systems without support for the Bind metadata still tolerable for users.
  • Format: [PlainText][8x Tabs][JSON Data]
  • Json Schema: { BindVersion: 1, Bindings: […], (IdentifierType: “Snomed”) }
  • Binding Schema: { WordNumber: X, Name: “Z”, Identifier: “Y”, Length: 1}
  • Word Number: Word index, when words are delimited by whitespace and punctuation is trimmed.
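
For example, here is a small C# sketch of emitting and splitting the Trailer MetaData form for the fox sentence. The binding values below are illustrative only, and the JSON is hand-built just to show the shape of the draft schema:

using System;

// Sketch of emitting and reading the Trailer MetaData form described above.
// The binding values and identifiers are illustrative only.
class BindTrailerSketch
{
    static void Main()
    {
        string plainText = "The quick brown fox jumped over the lazy dog";

        // Draft schema: { BindVersion, Bindings: [ { WordNumber, Name, Identifier, Length } ] }
        string json = "{ \"BindVersion\": 1, \"Bindings\": ["
            + "{ \"WordNumber\": 2, \"Name\": \"fast speed\", \"Identifier\": null, \"Length\": 1 },"
            + "{ \"WordNumber\": 4, \"Name\": \"animal\", \"Identifier\": null, \"Length\": 1 }"
            + "] }";

        // Format: [PlainText][8x Tabs][JSON Data]
        string delimiter = new string('\t', 8);
        string bound = plainText + delimiter + json;

        // A viewer without Bind support still shows readable plain text; a
        // Bind-aware viewer splits on the 8-tab delimiter and parses the JSON.
        var parts = bound.Split(new[] { delimiter }, StringSplitOptions.None);
        Console.WriteLine(parts[0]); // plain text
        Console.WriteLine(parts[1]); // metadata
    }
}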

Mixed MetaData Option:

  • When multiple preceding words are covered by the Binding, a number of dash indicates how many more words are covered. Bind Text: “John Smith [-Name]” indicates the two words “John Smith” are a Name.
  • The identifiers specified in ontological databases such as Snomed, may be represented with a final dash and then the identifier. Bind Text: “John Smith [-Name-415]” indicates a word definition identifier of 415, which may have a description of “A person’s name”.
  • When a square bracket is intended by the author, output a double square bracket. Bind Text: “John Smith [-Name] [[#123456]]” renders plainly to “John Smith [#123456]”

Music needs Intelligence and hard work

In all things, I believe a person’s overall intelligence is the first factor which determines their performance. Some of the best sporting athletes find themselves running successful business ventures. The same goes for the best comedians. Of course, hard work and training are necessary for any craft that an intelligent person applies themselves to, but good outcomes seldom happen by accident.

Today I stumbled across a YouTube clip – Haywyre – Smooth Criminal – and concluded that this was one smart guy, and at such a young age! This assumption was further supported by some brief research through other news articles about him. He has done most of the mastering of his albums, and I wouldn’t be surprised if he produced the YouTube video clip and website on his own too! When such intelligence collides with a focused hard work ethic, this is what you get. Of the music articles so far written, I don’t think any of the writers have realized yet that they are writing about a genius just getting started.

His style definitely resonates with me, with his Jazz and Classical roots, but most importantly for me is the percussive expression that drives his compositions. Too many people will be captivated by the improvisation in the melody, but that’s only one layer of his complex compositions. If he’s still working solo, he will need to find good people to collaborate with into the future to reach his full potential. I hope Martin applies himself to other genres of music and other pursuits.

I happen to work in the software development industry, and have found that it doesn’t matter how much schooling or experience someone has had: their potential can still be capped by their overall intelligence. One’s brain capacity is somewhat determined by genes, diet and early development. Once you have fully matured, there’s little or no ability to increase your brain power. That would be confronting for a lot of people who find themselves eclipsed by giants of thought.

So it’s no wonder intelligence is seldom a measure of a person these days. Musicians are often praised as being talented for their good music, but that excludes all others: the implication is that they must have some magical talent to succeed. As with creativity, the truth is less interesting, but very important. We should be pushing young children to develop intelligence, and to value intelligence, not Hollywood “talent”. I suspect that valuing intelligence publicly risks implying a lack of intelligence in ineffective musicians (the same applying to other crafts).

Don’t let political politeness take over.

Digital Things

The “Internet of Things” is now well and truly established as a mainstream buzzword. The reasons for its success could be explored at length; however, the term is becoming overused, just like “Cloud”. It has come to mean many different things to different people in different situations. “Things” works well to describe technology reaching smaller items, but “Internet” is only one component of a broader field that we can call Digital Things.

This Digital Things revolution is largely driven by the recent accessibility of tools such as Arduino, Raspberry Pi and more: a miniaturisation of computing that stretches even the definition of embedded computing. Millions of people are holding such tools in their hands wondering what to do with them. They all experience unique problems, and we see some amazing ideas emerge from these masses.

In health, the quantified self may eventually see information flow over the internet, but that’s not what all the fuss is about. Rather, it’s about Information from Things: measuring as much as we can, with new sensors enabling new waves of information. We want to collect and analyse that information, and connecting these devices to the internet is certainly useful for doing so.

Then there are many applications for the Control of Things. Driverless cars are generally not internet connected, and neither are vacuum robots, burger-building machines, a novel 100k-colour pen or many, many more things. It would seem that using the term Internet of Things as inspiration limits the possibilities.

In the end, Digital Things is the most suitable term to describe what we are seeing happen today. We are taking things in our lives which normally require manual work, and using embedded electronics to solve problems, whether for information or control; the internet is not always necessary.

Let’s build some more Digital Things.

Geelong has a clean slate

I hope you’re done. Q&A was your last chance to detox from any doom and gloom you had left.

The loss of jobs, particularly at Ford, is not a pleasant experience for retrenched workers, but there’s no changing the past. The fact is Geelong now has a clean slate to dream big, and driverless electric vehicles are a perfect fit for the future of manufacturing.

On Q&A last night, Richard Marles was spot on, describing the automotive industry as one of our most advanced in supporting technical innovation in Australia. But ironically, the industry as a whole has missed the boat and was always on a trajectory toward disaster.

I have been watching the industry since 2010. I have observed the emerging phenomenon of the electric vehicle, and the lack of interest from our local automotive industry despite the clear need. I have realised that automation is to be embraced despite the unpleasant short-term job losses. And still we’re about to miss a huge opportunity.

The public forum is full of emotion, desperation, finger pointing, and frankly ignorance.

Geelong, we have a clean slate.

Kindly watch this video, https://www.youtube.com/watch?v=CqSDWoAhvLU. It’s all Geelong needs to drop the past and grasp the future, so share it with your friends and call up all the politicians you know. It’s been there the whole time, and this vision for Geelong is all we need to forget our sorrows. You won’t understand unless you see the video. We need to act now.

I have covered Electric Vehicles comprehensively in the past, but they’re today’s reality. We need to aim higher. Does Geelong even know anything about driverless cars?

People are immediately cautious of change, which is why the technology needs to be tested and tested here in Geelong. This will be a great focal point for our retraining efforts. Imagine cheap transport and independence for the elderly and disabled. Cheaper, safer and faster deliveries. Reduced traffic congestion and elimination of traffic lights – no stopping! Cars that drop you off and pick you up will park out of town – what car parking problem? What will we do with all those empty car park spaces in the city? More green plants and al fresco dining?

But most importantly zero road fatalities. If this is the only reason, it’s all we need.

They are legal in California today. What stepping stones will we take to legalise fully driverless cars in Victoria? These massive technology companies will only move next to hospitable markets. Who is talking to Nissan and Tesla about building the next generation of electric driverless vehicles in Geelong? We have been given a clean slate, there are too many exciting opportunities around to waste any more time on self-pity!

Oh, and trust me when I say that’s just the tip of the iceberg – I’m not telling you everything; find out for yourself. Click all the links found in this article for a start, it’s what they’re for.

Hint: There’s more to come from me, including the idea to start a “Manufacturing as a Service” company for Automotive, just like Foxconn does for electronics in China, inviting the Ford/Alcoa workers, their investment, GRIIF investment, outside investors and Tesla. There’s lots more work to do, but it’ll be worth it.



Let’s leave Javascript behind

Disclaimer: I am sure Javascript will continue to be supported, and continue even to progress in features and support, regardless of any Managed Web. Some people simply love it, with all the flaws and pitfalls, like a sweet elderly married couple holding onto life for each other.

It’s great what the web industry is doing with ECMAScript; from version 6 we will finally see something resembling classes and modules. But isn’t that something the software industry has had for years? Why do we continue to handicap the web with an inferior language, when there have always been better options? Must we wait another 2-3 years before we get operator overloading in ECMAScript 7?

The .Net framework is a rich standardised framework with an Intermediate Language (IL). The compiler optimisations, toolset and importantly the security model, make it a vibrant and optimised ecosystem which could be leveraged. It could have been leveraged years ago with a bare minimum Mono CLR.

Google Chrome supports native code, but it runs in a separate process, and calls to the DOM must be marshalled through inter-process communication. This is not ideal. If the native code support were in the same process, it would be a good foundation for Mono.

I believe it is possible, perhaps even trivial, to achieve this nirvana of a Managed Web. We just need to take small considered steps to get there, so here’s my plan.

  1. Simple native code in the same process – Javascript is currently executed on the main thread, presumably through the window message pump executing delegates. These delegates can simply forward to managed function delegates. But first we should be able to trigger an alert window through native code which is compiled inside the Google Chrome code base.
  2. Simple mono support – Fire up Mono, provide enough support in a Base Class Library (BCL) for triggering an alert. This time there will be an IL DLL with a class which implements an Interface for start-up.
  3. Fuller API – With the simple milestones above completed, a complete BCL API can be designed and implemented.
  4. Optimisations – For example, enumerating the DOM may be slowed by crossing the managed/unmanaged boundary; jQuery-like functions could be implemented in native code and exposed through the BCL.

Along the way, other stacks and browsers could also leverage our work, establishing support for at least Java as well.

Example API:

IStartup

  • void Start(IWindow window) – Called when the applet is first loaded, just like when Javascript is first loaded (for Javascript there isn’t an event; it simply starts executing the script from the first line)

IWindow
see http://www.w3schools.com/jsref/obj_window.asp

IDocument
see http://www.w3schools.com/jsref/dom_obj_document.asp
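
To make the shape of this API concrete, here is a hypothetical applet written against it. Every interface and member below is an assumption of mine, loosely mirroring the Javascript window and document objects linked above; nothing like this exists in Chrome or Mono today.

// Hypothetical Managed Web applet, assuming the proposed IStartup/IWindow/IDocument API.
// None of these interfaces exist today; they only mirror the familiar Javascript objects.
public interface IStartup
{
  void Start(IWindow window);
}

public interface IWindow
{
  IDocument Document { get; }
  void Alert(string message); // mirrors window.alert
}

public interface IDocument
{
  IElement GetElementById(string id); // mirrors document.getElementById
}

public interface IElement
{
  string InnerHtml { get; set; } // mirrors element.innerHTML
}

public class HelloApplet : IStartup
{
  public void Start(IWindow window)
  {
    window.Alert("Hello from managed code!");
    window.Document.GetElementById("status").InnerHtml = "Loaded without Javascript.";
  }
}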

Warm up – Possible disadvantage

Javascript can be interpreted straight away, with several levels of optimisation applied only where needed, favouring fast start-up. IL would need to be JIT-compiled, which is a relatively slow process, but there’s no reason it couldn’t be AOT-compiled by the web server. Still, I see this as the biggest disadvantage, and it needs to be front of mind.

Other people around the web who want this

http://tirania.org/blog/archive/2012/Sep-06.html

University BC

University is becoming increasingly irrelevant for the IT industry.

It’s 3 years of full-time study, yet in a month a talented 12-year-old kid can write an app that makes him a millionaire. Course content is always lagging behind, not for lack of pushing by academics and industry; the bureaucracy of the system drags. With content such as teamtreehouse.com on the up, there is potential for real upset in the IT education market. And without any entrepreneurship support, there is no excitement and no potential to build something meaningful from nothing. Increasingly, universities will be perceived as the old way, by new students as well as by industry looking to hire.

I would like to see cadet-ships for IT, with online content and part-time attendance to training institutions for professional development and assessment. Although even assessment is questionable: Students are not provided access to the internet during assessments, which does not reflect any true-to-life scenario. I value a portfolio of code over grades.

I seek individuals who have experience in Single Page Applications (SPA), Knockout.js, Javascript, jQuery, Entity Framework, C#, 2D drawing, graphic art, and SQL (Window Functions). Others are looking for Ruby on Rails developers. All of my recent graduates have very limited exposure to any of these.

I could be wrong, but if I am right, institutions that ignore such facts are only going to find out the hard way. I know the IT industry has been reaching out to universities to help keep them relevant; it’s time for universities to reach back out to the industry, and relinquish some of their control for the benefit of both students and the industry.


Inverse Templates

Hackathon project – Coming soon….

[Start Brief]
Writing open source software is fun, but to get recognition and feedback you need to finish and promote it. Todd, founder of Alivate, has completed most of the initial parts of a new open source project, “Inverse Templates”, including most of the content below, and will work with this week’s hackathon group to publish it as an isolated open source project and NuGet package.

Skills to learn: Code Templating, Code Repositories, NuGet Packages, Lambda, Text Parsing.
Who: Anyone from High School and up is encouraged to come.

We will also be able to discuss future hackathon topics and schedule. Don’t forget to invite all of your hacker friends!

Yes, there will be Coke and Pizza, donated by Alivate.
[End Brief]

The Problem

Many template engines re-invent the wheel, supporting looping logic, sub-templates and many other features. Control code is also awkward, and extensive use makes template files look confusing to first-time users.

So why have yet another template engine, when instead you can simply leverage the coding language of your choice, along with the skills and experience you have fought hard for?

The Solution

Normal template engines treat output content (HTML, for example) as the first-class citizen, with variables and control code being second class. Inverse Template systems are about reversing this. By using the block-comment feature of (at least) C-like languages, Inverse Template systems let you leverage the full power of your programming language.

At the moment we only have a library for C# Inverse Templates. (Search for the NuGet Package, or Download and reference the latest stable DLL)

Need a loop? Then use a for, foreach, while, and more.
Sub-templating? Call a function, whether it’s in the same code-file, in another object, static or something more exotic.

Introductory Examples

Example T4:

Introductions:
<# foreach (var Person in People) { #>
Hello <#= Person.Name #>, great to iterate you!
<# } #>

Example Inverse Template:

/*Introductions:*/
foreach (var Person in People) {
/*
Hello */w(Person.Name);/*, great to iterate you!*/
}

As you can see, we have a function named w, which simply writes to the output file. More functions are defined for tabbing, and being an Inverse Template you can inherit the InverseTemplate object and extend as you need! These functions are named with a single character, so they aren’t too imposing.

Pre-Processing

As with T4 pre-processing, Inverse Template files are also pre-processed, converting comment blocks into code, then saved as a new file which can be compiled and debugged. Pre-processing, as opposed to interpreting templates, is required because we rely on the compiler to compile the control code. Furthermore, there are performance benefits to pre-processed (and compiled) templates over interpreted ones.

Example pre-processed Inverse Template:

l("Introductions:");
foreach (var Person in People) {
  n("");
  t("Hello ");w(Person.Name);
}

Function l will output any tabbing, then the content, then a line-ending.
Function n will output the content followed by a line-ending.
Function t will output any tabbing followed by the content.

The pre-processor will find and process all files in a given folder hierarchy ending with “.ct.cs”. The pre-processor is an external console application, so that it will even work with Express editions of Visual Studio.

You should:

  • Put all of your Definitions into the folder .\InverseTemplates\Definitions\ (sub-folders are ok)
  • Actively exclude, and then re-include, the generated .\InverseTemplates\Processed\ folder after pre-processing
  • Exclude the Definitions folder before you compile/run your project

Not the answer to all your problems

I’m not claiming that Inverse Templates are the ultimate solution. They’re simply not. If you have content-heavy templates with no control code and minimal variable merging, then perhaps you want to just use T4.

Also, you may find that you’re more comfortable using all of the InverseTemplate functions directly {l,n,w,t}, instead of using comment blocks. In some cases this can look more visually appealing, and then you can bypass the pre-processing step. This could be particularly true of templates where you have lots of control code and minimal content.
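
For example, the introductions template from earlier could be sketched with direct calls, skipping the comment-block pre-processing step entirely. This assumes the InverseTemplate base class from the NuGet package above, and that a People collection is available to the template (for instance via the DataContext mentioned later).

//Sketch: the "Introductions" template using the output functions {l,t,w,n} directly,
//so there is no comment-block pre-processing step. People is assumed to be supplied
//to the template (e.g. via the DataContext).
class DirectIntroductions : InverseTemplate {
  public override void Generate() {
    l("Introductions:");
    foreach (var Person in People) {
      t("Hello ");w(Person.Name);w(", great to iterate you!");n("");
    }
  }
}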

But then again, keep in mind that your code-editor will be able to display a different colour for comment blocks. And perhaps in the future your code-editor may support InverseTemplates using a different syntax highlighter inside your comment blocks.

For a lot of work I do, I’ll be using Inverse Templates. I will have the full power of the C# programming language, and won’t need to learn the syntax of another template engine.

I’m even thinking of using it as a dynamic rendering engine for web, but that’s more of a curiosity than anything.

Advanced Example – Difference between Function, Generate and FactoryGet

class TemplateA : InverseTemplate {
  public override void Generate() {
    /*This will be output first, no line-break here.*/
    FunctionC(); //A simple function call, I suggest using these most often, mainly to simplify your cohesive template, when function re-use is unlikely.
    Generate<TemplateB>(); //(call shape assumed) Generating another template is useful when there is some function re-use, or perhaps you want to contain your generation in specific files in a particular structure
    IMySpecial s = FactoryGet("TemplateD"); //This is useful for more advanced situations which require a search by interface implementation, with optional selection of a specific implementation by class name.
    s.SpecificFunction("third");
  }
  private void FunctionC() {
    /*
    After a line-break, this is now the second line, with a line-break.
    */
  }
}
class TemplateB : InverseTemplate {
  public override void Generate() {
    /*This will be the third line.*/
  }
}
interface IMySpecial
{
  void SpecificFunction(string SpecificParameter);
}
class TemplateD : InverseTemplate, IMySpecial
{
  public void SpecificFunction(string SpecificParameter) {
    /* This will follow on from the */w(SpecificParameter);/* line.
    */
  }
}
class TemplateF : InverseTemplate, IMySpecial
{
  //Just to illustrate that there could be multiple classes implementing the specialised interface
  public void SpecificFunction(string SpecificParameter) { } //implementation omitted
}

Advanced – Indenting

All indent is handled as spaces, and is tracked using a stack structure.

pushIndent(Amount) will increase the indent by the amount you specify; if no parameter is specified, the default is 4 spaces.
popIndent will pop the last amount of indent pushed onto the stack.
withIndent(Amount, Action) will increase the indent only for the duration of the specified action.

Example:

withIndent(8, () => {
  /*This will be indented by 8 spaces.
  And so will this, on the next line.
  I recommend you only use this when calling a function.*/
});
/*This will not be indented.*/
/*Within a single function you should
    control your indent manually with spaces.*/
if (1 == 1) {
/*
    it will be easier to see compared to calls to any of the indent functions {pushIndent, withIndent, etc..}*/
  if (2 == 2) {
/*
    just keep your open-comment-block marker anchored in-line with the rest*/
  }
}

These are all the base strategies that I currently use across my Inverse Templates. I also inherit InverseTemplate and make use of the DataContext, but you’ll have to wait for another time before I explain that in more detail.


IL to SQL

There are some cases where one needs to perform more complex processing, necessitating either application-side processing or custom SQL commands for better performance. For example, splitting one column of comma-delimited data into 3 other columns:

public virtual void DoEntityXSplit()
{
  var NeedSplitting = db.EntityXs.Where(x => x.Splitted1 == null);
  foreach (var item in NeedSplitting)
  {
    string[] split = item.DelimitedField.Split(',');
    item.Splitted1 = split[0];
    item.Splitted2 = split[1];
    item.Splitted3 = split[2];
    item.Save(); //THIS...
  }
  db.SaveChanges(); //OR this
}

When you run DoEntityXSplit, the unoptimised code may run. However, automatic optimisation is possible, derived from the IL (Intermediate Language – i.e. .NET bytecode) body of the method, when:
i) the ORM (Object Relational Mapping – e.g. nHibernate / EntityFramework) supports some sort of “IL to SQL” compilation at all; and
ii) the function doesn’t contain any unsupported patterns or references.
If so, the raw SQL may be run instead. This could even include dynamically creating a stored procedure for faster operation.

public override void DoEntityXSplit()
{
  //This is pseudo SQL code
  db.RunQuery("
    declare cursor @NeedSplitting as (
      select ID, DelimitedField
      from EntityXs
      where Splitted1 is null
    );

    open @NeedSplitting;
    fetch next from @NeedSplitting into @ID, @DelimitedField
    while (@StillmoreRecords)
    begin
      @Splitted1 = fn_CSharpSplit(@DelimitedField, ',', 0)
      @Splitted2 = fn_CSharpSplit(@DelimitedField, ',', 1)
      @Splitted3 = fn_CSharpSplit(@DelimitedField, ',', 2)

      update EntityX
      set Splitted1 = @Splitted1,
      Splitted2 = @Splitted2,
      Splitted3 = @Splitted3
      where ID = @ID

      fetch next from @NeedSplitting into @ID, @DelimitedField
    end
  ");
}

Of course, this could also be compiled to:

public override void DoEntityXSplit()
{
  //This is pseudo SQL code
  db.RunQuery("
    update EntityX
    set Splitted1 = fn_CSharpSplit(DelimitedField, ',', 0),
    Splitted2 = fn_CSharpSplit(DelimitedField, ',', 1),
    Splitted3 = fn_CSharpSplit(DelimitedField, ',', 2)
    where Splitted1 is null
  ");
}

But I wouldn’t expect that from version 1, or would I?

Regardless, one should treat IL as source code for a compiler which has optimisations for T-SQL output. The ORM mappings would need to be read to resolve IL properties/fields to SQL fields. It may sound crazy, but it’s definitely achievable, and this project looks like a perfect fit for such a feat.
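
As a rough starting point, the IL bytes of a method body are already easy to obtain through reflection. The following is only a sketch of that first step; the EntityXRepository class name is hypothetical, and the actual opcode decoding and T-SQL generation are left to the imagined compiler.

using System;
using System.Reflection;

// Hypothetical container for the DoEntityXSplit method shown earlier.
class EntityXRepository
{
  public virtual void DoEntityXSplit() { /* body as shown above */ }
}

class IlToSqlSketch
{
  static void Main()
  {
    // Grab the raw IL bytes of the method body for analysis.
    MethodInfo method = typeof(EntityXRepository).GetMethod("DoEntityXSplit");
    byte[] il = method.GetMethodBody().GetILAsByteArray();

    // A real IL-to-SQL pass would decode these opcodes, resolve member accesses
    // against the ORM mappings, and emit equivalent T-SQL.
    Console.WriteLine(method.Name + " body is " + il.Length + " bytes of IL.");
  }
}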

Where will BLToolKit be in 10 years? I believe ILtoSQL should be a big part of that future picture.

If I get time, I’m keen to have a go. It should be built as a standalone DLL which an ORM can leverage. Who knows, maybe EF will pick this up?


Poppler for Windows

I have been using the Poppler library for some time, over a series of various projects. It’s an open source set of libraries and command line tools, very useful for dealing with PDF files. Poppler is targeted primarily at the Linux environment, but the developers have included Windows support in the source code as well. Getting the executables (exe) and/or DLLs for the latest version, however, is very difficult on Windows. So after years of pain, I jumped on oDesk and contracted Ilya Kitaev to both compile with Microsoft Visual Studio and prepare automated tools for easy compiling in the future. Update: MSVC isn’t very well supported; these days the download is built with MinGW.

So now, you can run the following utilities from Windows!

  • PDFToText – Extract all the text from a PDF document. I suggest you use the -layout option to get the content in the right order (see the sketch below for calling it from C#).
  • PDFToHTML – Which I use with the -xml option to get an XML file listing all of the text segments’ text, position and size, very handy for processing in C#
  • PDFToCairo – For exporting to image types, including SVG!
  • Many more smaller utilities
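
As a minimal sketch of driving one of these utilities from C#, the following launches pdftotext with the -layout option and reads the result back in. The install path and file names are placeholders only.

// Sketch: calling the Windows pdftotext.exe from C# and reading its output.
// The executable path and PDF/text file names are illustrative placeholders.
using System.Diagnostics;
using System.IO;

class PopplerSketch
{
  static void Main()
  {
    var psi = new ProcessStartInfo
    {
      FileName = @"C:\tools\poppler\bin\pdftotext.exe", // wherever you extracted the binaries
      Arguments = "-layout \"input.pdf\" \"output.txt\"", // -layout keeps the reading order sane
      UseShellExecute = false,
      CreateNoWindow = true
    };

    using (var process = Process.Start(psi))
    {
      process.WaitForExit();
    }

    System.Console.WriteLine(File.ReadAllText("output.txt"));
  }
}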

Download

Latest binary: poppler-0.68.0_x86

Older binaries:
poppler-0.67.0_x86
poppler-0.58.0_x86
poppler-0.57.0_x86
poppler-0.55.0_x86
poppler-0.54_x86
poppler-0.51_x86.7z
poppler-0.50_x86.7z
poppler-0.49_x86.7z
poppler-0.48_x86.7z
poppler-0.47_x86.7z
poppler-0.45_x86.7z
poppler-0.44_x86
poppler-0.42.0_x86.7z
poppler-0.41.0_x86.7z
poppler-0.40.0_x86.7z
poppler-0.37_x86.7z
poppler-0.36.7z
poppler-0.35.0.7z
poppler-0.34.0.7z
poppler-0.33.0.7z
poppler-0.26.4.7z
poppler-0.26.3.7z
poppler-0.26.1_x86
poppler-0.26.1
poppler-0.22.0
poppler-0.18.1

Update [2018-08-29]:

Windows Subsystem for Linux (WSL) is a great option for many Windows users and developers. You can enable WSL and install Ubuntu if you are not using the “S” edition of Windows. Then you can simply run “sudo apt install poppler-utils”. If you’re a developer, you can still start the Ubuntu-based poppler tool(s) using the wsl command: “wsl pdftocairo …”

As it turns out though, your poppler version will be limited to whatever ships with your Ubuntu distribution at the time; 18.04 uses poppler 0.62. So in some ways our work on Windows compiling can be better – it gives you the latest version.

If you need perfect support for Qt and other features missing from our MinGW-built Windows version, then WSL might be the best way to go. I’m guessing the WSL/Ubuntu version is 64-bit, for instance.

It would be nice if the Poppler team built an mmap IPC convention for processing PDF files. That way the process (either WSL or MinGW) could keep running, processing PDFs as requests are received and returning the output to the caller, like a server. It could be even simpler if it just ran as a small web server.