Stasher – File Sharing with Customer Service

(This is quite a technical software article, written with software coders in mind)

It’s time for a new file sharing protocol. P2P in general is no longer relevant as a concept, and central filesharing sites show that consumers are happy with centralised systems with a web interface. I think I have a good idea for the next incremental step, but first some history.

It's interesting that P2P has died down so much. There was Napster and the successes which followed, but BitTorrent came to rule them all. File discovery was lost along the way, and with Universal Plug and Play being a big security concern, even re-uploading is often not on by default.

P2P is no longer needed. It was valuable before because it distributed the upload bandwidth and provided some anonymity, but bandwidth continues to fall in price. MegaUpload and others like it were actually the next generation: they added some customer service around the management of files and charged for premium service. Dropbox and others have since carved out even more of that space.

Stash (which is hopefully not trademarked) is my concept to bring back discovery. It's a different world now, where many use VPNs and even Tor, so we don't need to worry so much about security and anonymity.

It's so simple, it's easy to trust. With only a few hundred lines of code in a single file, one can compile their own copy on Windows in seconds, so there can be no hidden backdoors. Users who can't be bothered with that can download the application from a trusted source.

It works by being ridiculously simple. A dumb application runs on your computer and is set up to point to one or more servers. It only operates on one folder, the one it resides in. From there, the servers control Stasher. A client can perform a small set of actions at a server's request, and can ban a server from a particular action.

And that's it. It's so basic, you should never have to update the client. New features should be resisted. Thumbnails should be generated on the server, because the server has the time and bandwidth to simply fetch the whole file.
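To make the shape of such a client concrete, here is a minimal C# sketch. The endpoint names, command format and polling interval are all invented for illustration; they are not part of any actual Stasher protocol.

using System;
using System.IO;
using System.Net.Http;
using System.Threading.Tasks;

// Hypothetical sketch of a dumb Stash client: it only touches its own folder,
// and only does what its configured servers ask. Endpoint and command names are invented.
class StasherClientSketch
{
    static readonly string Folder = AppContext.BaseDirectory;               // the one folder it operates on
    static readonly string[] Servers = { "https://stash.example.com" };     // configured server list

    static async Task Main()
    {
        using var http = new HttpClient();
        while (true)
        {
            foreach (var server in Servers)
            {
                // Ask the server what it wants done (invented endpoint).
                string command = await http.GetStringAsync(server + "/next-command");
                var parts = command.Split(' ', 2);

                if (parts.Length == 2 && parts[0] == "GET") // e.g. "GET wallpaper.jpg" -> download into the local folder
                {
                    byte[] data = await http.GetByteArrayAsync(server + "/files/" + parts[1]);
                    await File.WriteAllBytesAsync(Path.Combine(Folder, parts[1]), data);
                }
                // Upload, delete and list commands would follow the same pattern,
                // and a client-side ban list could simply skip disallowed actions here.
            }
            await Task.Delay(TimeSpan.FromSeconds(30)); // poll politely
        }
    }
}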

All of this works with varying software on the server, but the same Stash client. There is no direct P2P; however, several servers can coordinate, so that a controller server can ask a client to upload to another specific server. Such a service can pre-package the Stash client with specific servers. Then, throughout its lifetime, the client's server list can be updated with new servers.

I’m thinking of building this, but I’m in no rush. I’ll make it open source. Can you think of any other applications for such a general-purpose file sharing framework?

For more information, see https://bitbucket.org/merarischroeder/stasher/wiki/Home

Appendix

Security measures ideas:

  • [Future] Code Virtual Machine
    • Only System and VM namespaces used
    • VM namespace is a separate small DLL which interacts with the system { Files, Network, System Info }
    • It’s easier to verify that the VM component is safe in manual review.
    • It’s easy to automatically ensure the application is safe
    • Only relevant for feature-extended client, which will span multiple files and more
  • [Future] Security analyser works by decompiling the software – ideally a separate project

Remaining problems/opportunities:

  • Credit – who created that original photo showing on my desktop? They should get some sort of community credit, growing with the votes they receive. We need some sort of separate/isolated server which takes a hash and signs/stores it with a datetime, and potentially also with extra meta-data such as author name/alias
    • Reviewers, while not as important, should also be able to have their work registered somewhere. If they review 1000 desktop backgrounds, that's time. Flickr, for example, could make a backup of such credit. Their version of the ledger could be signed and dated by a similar process.
  • Executable files and malware – 
    • AntiVirus software on the client
    • Trusting that the server makes such checks – eg. looking inside non-executables even for payloads. ie. image file tails.
  • Hacked controller
    • File filters on the client to only allow certain file types (to exclude executable files) – { File extensions, Header Bytes }
    • HoneyPot Clients – which monitor activity, to detect changes in the behaviour of particular controllers
    • The human operator of a controller types in a password periodically to assure clients that it's still under their control. Message = UTCTimestamp + PrivateKeyEncrypt(UTCTimestamp), which is stored in memory (a signing sketch follows below).
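A quick sketch of that heartbeat in C#, using RSA signing (signing is the standard way to express "encrypt with the private key"). Key storage and distribution are glossed over here; this is only an illustration of the idea.

using System;
using System.Security.Cryptography;
using System.Text;

// Sketch of the periodic proof-of-control message: the operator's private key
// signs the current UTC timestamp, and clients verify with the published public key.
class ControllerHeartbeat
{
    static void Main()
    {
        using RSA rsa = RSA.Create(2048); // in practice, load the operator's existing key pair

        string timestamp = DateTime.UtcNow.ToString("o");
        byte[] signature = rsa.SignData(
            Encoding.UTF8.GetBytes(timestamp),
            HashAlgorithmName.SHA256,
            RSASignaturePadding.Pkcs1);

        // Message = UTCTimestamp + signature over that timestamp
        string message = timestamp + "|" + Convert.ToBase64String(signature);
        Console.WriteLine(message);

        // A client holding only the public key would check:
        bool valid = rsa.VerifyData(
            Encoding.UTF8.GetBytes(timestamp),
            signature,
            HashAlgorithmName.SHA256,
            RSASignaturePadding.Pkcs1);
        Console.WriteLine($"Signature valid: {valid}");
    }
}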

Food Forever?

What if we could save our spoiling food before it was too far gone? I often have half a litre of milk which spoils at the office and I have to tip it down the sink.

I’m no biochemist, so I’m hoping this idea finds a nice home with a real scientist who either debunks it or points the way forward.

Could we have a home appliance which could UHT-treat leftover milk, so that we can use it later or donate it?

Are there other foods which could be preserved in such a way? I'm guessing most would need an ultra-heat process. Like an autoclave, you need to kill all the bacteria with no regard for taste. If it's meat, it might be tough, but it would at least be better pet food than what's in a can.

Problem?

5 Secret Strategies for GovHack

Monday night I attended the VIC GovHack Connections Event. No, there wasn't any pizza… but there was a selection of cheeses, artichokes and more.

Here are my Top 5 tips

1) Do something very different

This competition has been running for a number of years and the judges are seeing similar patterns emerge. Browse through previous years' hacker-space pages and look at the types of projects they've had before. Look at the winners.

2) Evaluate the data

This might be the main aim of your project, but we want quality data for future years, and enough evidence to remove the unnecessary, find the missing, and refresh the old.

3) Prove real-time and live data

Melbourne City have their own feeds of real-time data this year. If you want to see more of that, consider using this data.

4) Simulate data

This strengthens your assessment of missing data [2], could involve simulated live data feeds [3], and would be very different [1].

5) Gather data

This is actually a bit harder than simulating data [4], but very useful. You could use computer vision, web scraping, or make an open app (like OpenSignal) that many people install to collect data.

Off the record

I’ve got a few ideas for GovHack projects in mind on the day. I’m not competing, so come and talk to me on Friday night or Saturday for ideas along these lines.

Try Scope Catch Callback [TSCC] for ES6

So it has started; it wasn't a hollow thought bubble. I have begun the adventure beyond the C# nest [http://blog.alivate.com.au/leave-c-sharp/]. It will take a while, because I still have a lot of software that runs on C#, and I do still like the language, but all new development will be on ES6 and NodeJS.

So I’m going to record my outlook over a few blog posts. I re-discovered Cloud9 IDE, and I’ve got a few thoughts on architecture and a new feature for ES6.

Today, I’ll tell the world about my proposed ES6 enhancement.

Despite the ECMAScript committee stating they are about "Standards at Internet Speed", there isn't much Internet tooling in place to make that happen. They have certainly been successful in making rapid progress, but where does one submit an idea to the committee? There's not even an email link. I'm certainly not going to cough up around $100k AUD to become a full member. [Update: They use GitHub; a link to it from their main website would be great. Also check out: https://twitter.com/ECMAScript]

So I’ll be satisfied to just put my first ES6 idea here.

Try blocks don't work in a callback world. I'm sure there are libraries which could make this nicer. In C#, by contrast, try blocks do work with the async language features.
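For comparison, here's a minimal C# sketch (the URL is just a placeholder) where the catch does fire even though the failure happens inside an awaited asynchronous operation:

using System;
using System.Net.Http;
using System.Threading.Tasks;

class AsyncCatchDemo
{
    static async Task Main()
    {
        using var http = new HttpClient();
        try
        {
            // The request runs asynchronously, yet exceptions still surface here.
            string body = await http.GetStringAsync("https://example.com/");
            Console.WriteLine(body.Length);
        }
        catch (HttpRequestException e)
        {
            // Unlike the callback-based Javascript below, this catch is reached.
            Console.WriteLine(e.Message);
        }
    }
}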

So here is some code which won’t catch an error

try
{
    $http.get(url).then((r) => {
        handleResponse(r);
    });
}
catch (e)
{
    console.log(e);
}

In this example, if there is an error during the HTTP request, it will go uncaught.

That was simple, though. How about a more complex situation?

function commonError(e) {
    console.log(e);
}

try
{
    runSQL(qry1, (result) => {
        doSomethingWith(result);
        runSQL(qry2, (result) => {
            doSomethingWith(result);
        }, commonError)
    },commonError);
}
catch (e)
{
    commonError(e);
}

Callback nesting isn’t very nice. This is why `await` is pushed forward as a good candidate. But what if the API you target doesn’t implement Promise? What if you only sometimes define a try block?

My proposal is to supply a method which gets the Try Scope Catch Callback [TSCC]. If you don’t return a promise, it would be like this:

function get(url, then, error) {
  error = error || window.callback.getTryScopeCatchCallback(); //TSCC (proposed API)

  //when an error occurs:
  error(e);

  //The error callback could have been reached from an enclosing
  //try/catch block, or passed in as the error handler of another callback
}

Promises already have a catch function in ES6. They're so close! A Promise should direct its error/catch callback to the TSCC by default. If the Promise spec were updated to include this, my first example above would have caught the error with no changes to the code.

So what do you think ECMA members, can we get this into ECMAScript?

Feedback log – from [email protected] maillist

  • kdex

Why not just transform callback-based APIs into `Promise`s and use (presumably ES2017) `await`/`async` (which *does* support `try`/`catch`)?

e.g.:

try {
    await curl("example.com");
    /* success */
}
catch (e) {
    /* error */
}

  • My response

1. Whether you await or not, the try scope’s catch callback [TSCC] should still be captured.

2. If there is no use of Promise (for the coder's own design reasons), the try scope's catch callback [TSCC] should still be available.

GovHack – Do we need real-time feeds?

It’s the year 2016, and we still don’t know how many minutes away the next bus is in Geelong.

Public releases of data take time and effort, and unless they are routinely refreshed, they get stale. But there are certain types of information that can't be more than minutes old and still be useful.

Traffic information is the most time sensitive. The current state of traffic lights, whether there are any signals currently out of order, and congestion information is already collected real-time in Australia. We could clearly benefit from such information being released as it happens.

But imagine if this benchmark of up-to-the-minute data was applied to all datasets. First of all, you wouldn't have any aging data. But more importantly, it would force data publication to be automated, and therefore scalable, so that instead of preparing yet another release of data, public servants could focus on the next type of data to make available.

What do you think?

Participate in GovHack this year, play with the data we do have and continue the conversation with us.

(I will be publishing a series of blog posts focusing on GovHack, exploring opportunities and challenges that I consider while I work on the committee for the Geelong GovHack, which runs 29-31 July 2016)

Image courtesy Alberto Otero García licensed under Creative Commons

GovHack – What tools will you use this year?

The world is always changing, and in the world of technology it seems to change faster.

You certainly want to win some of the fantastic prizes on offer, but remember, we want world changing ideas to drive real change for real people, and we can do that best together.

So share with us and your fierce competitors, which new tools and techniques you plan to use this year.

Some popular new ones that I'm aware of include Kafka and MapMe.

Both of these feed into my own personal desire to capture more data and help Governments release data real-time. Check them out, and please comment below about any tools and techniques you plan to use this year.

(I will be publishing a series of blog posts focusing on GovHack, exploring opportunities and challenges that I consider while I work on the committee for the Geelong GovHack, which runs 29-31 July 2016)

Image courtesy RightBrainPhotography licensed under Creative Commons

What data do you want to see at GovHack?

Let's forget about any privacy and national security barriers for the moment. If you could have any data from Government, what would you request?

GovHack is a great initiative which puts the spotlight on Government data. All of the departments and systems collect heaps of data every day, and lucky for us they’re starting to release some of it publicly.

You can already get topological maps, drainage points, bin locations, bbq locations, council budget data and much more. But that’s certainly not all the data they have.

Comment below on what data you think would be useful. It might already be released, but it would be interesting to go to Government with a nice long shopping list of data to have ready for us to delve into next year.

(I will be publishing a series of blog posts focusing on GovHack, exploring opportunities and challenges that I consider while I work on the committee for the Geelong GovHack, which runs 29-31 July 2016)

Image courtesy Catherine, licensed under Creative Commons

GovHack – How can we collect more data?

If we had all the cancer information from around the world, any keyboard warrior could wrangle the data and find helpful new discoveries. But we struggle to even complete a state-level database let alone a national or global one.

After being dazzled by the enormous amount of data already released by Government, you soon realise how much more you really need.

For starters, there are lots of paper records that aren't even digital. This isn't just a Government problem, of course; many private organisations also grapple with managing unstructured written information on paper. But if Governments are still printing and storing paper in hard copy form, we further delay a fully open digital utopia. At the very least, storing the atomic data separately from any merged and printed version enables future access, and stops the mindless discarding into the digital black hole.

Then consider all the new types of data which could be collected: the routes that garbage trucks and buses take, and the economics of their operation. If we had such data streams, we could tell citizens if a bus is running ahead or behind. We could have GovHack participants calculate more efficient routes. Could buses collect rubbish? We need data to know. More data means more opportunities for solutions and improvement for all.

When you consider the colossal task ahead of Government, we must insist on changing the culture so that data releases are considered a routine part of public service, and also make further data collection an objective, not a bonus extra. Until that happens, large banks of knowledge will remain locked up in fortresses of paper.

What do you think? Do you know of any forgotten archives of paper that would be useful for improving lives?

Participate in GovHack this year, play with the data we do have and continue the conversation with us.

(I will be publishing a series of blog posts focusing on GovHack, exploring opportunities and challenges that I consider while I work on the committee for the Geelong GovHack, which runs 29-31 July 2016)

Image courtesy Fryderyk Supinski licensed under Creative Commons

Why I want to leave C#

Startup performance is atrocious and, critically, that slows down development. It's slow to get the first page of a web application, slow to navigate to whole new sections, and worst of all: initial Entity Framework LINQ queries.

It's 2016 and .Net is very mature, but this problem persists. I love the C# language much more than Java, but when it comes to the crunch, run-time performance is critical. Yes, I was speaking of startup performance, but you encounter that whenever a new area of the software warms up, and also when the AppPool is recycled (scheduled every 29 hours by default). Customers see it most, but it's developers who must test and retest.

It wastes customers' and developers' time. Time means money, but the hidden loss is focus. You finally get focused on a task, but then have to wait 30 seconds for an ASP.NET web page to load so you can test something. Even stopping your debugging session in VS can take tens of seconds!

There are known ways to minimise such warm-up problems, such as native image generation and EF query caching. Neither is a complete solution. And why work around a problem that isn't experienced in node.js or even PHP?

.Net and C# are primarily for business applications. So how important is it to optimise a loop over millions of records (for big data and science) compared with the user and developer experience of running and starting with no delay?

Although I have been critical of Javascript as a language, its recent optimisations are admirable. It has been optimised with priority on first-use speed, and critical sections are optimised further as needed.

So unless Microsoft fixes this problem once and for all, without requiring developers to coerce workarounds, they're going to find long-term dedicated coders such as myself shifting to Javascript, especially now that ECMAScript and TypeScript make Javascript infinitely more palatable.

I have already recently jettisoned EF in favour of a proprietary solution which I plan to open source. I also have plans for node.js and even my own IDE which I plan to lease. I’m even thinking of leaving the Managed world altogether – Heresy!

.Net has lots going for it, it’s mature and stable, but that’s not enough anymore. Can it be saved? I’m not sure.

My Patch for the Internet Security Hole

I just posted another article about the problem, but there are several steps which could be taken today to plug the hole, although they won't protect any historical communications. This involves doubling up security with post-quantum cryptography (PQC), and also the use of a novel scheme that I propose here.

PQC can't be used alone today; it's not proven. The algorithms used to secure internet communication today were thoroughly researched, peer reviewed and tested. They have stood the test of time. But PQC is relatively new, and although accelerated efforts could have been made years ago to mature it sooner, they were not. That doesn't mean PQC doesn't have a part to play today.

Encryption can be layered, yielding the benefits of both schemes. RSA, for example, is very mature but breakable by a Quantum Computer. Any PQC scheme is immature but theoretically unbreakable by a Quantum Computer. By combining the two, the benefits of both are gained, at the cost of additional CPU overhead. This should be implemented today.

Standards need to be fast-tracked, and software vendors need to implement them with haste. Only encapsulation is required, like a tunnel within a tunnel. TLS may already have the ability for dual-algorithm protection built into the protocol; I'm yet to find out.

In addition to the doubling described above, I have a novel approach: Whisp. Web applications (ignoring oAuth) store a hash of a password for each user, and this hash can help to form a key to be used during symmetric encryption. Because symmetric encryption is also mature and unbreakable (even by a Quantum Computer), it's an even better candidate for doubling. But it would require some changes to the web application login process, and it has some unique disadvantages.

Traditionally, in a web application, a TLS session is started, which secures the transmission of a login username and password. Under Whisp, the fully secured TLS session would only be able to start after the user enters their password. The usual DH or RSA process is used to generate a symmetric key for the session, but that key is then processed further using the hash of the user's password (likely with a hashing algorithm). Only if the user enters the correct password will the secure tunnel be active and communication continue. There are still drawbacks to this approach, however.
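Here is a hypothetical sketch of that key step in C#: the handshake key agreed via DH/RSA is mixed with the stored password hash, so the tunnel only works if the user typed the right password. The names and the choice of HMAC-SHA256 are my own illustration, not part of any standard.

using System.Security.Cryptography;

// Hypothetical Whisp key step: mix the negotiated handshake key with the
// stored password hash to produce the final symmetric session key.
static class WhispSketch
{
    public static byte[] DeriveSessionKey(byte[] handshakeKey, byte[] storedPasswordHash)
    {
        // Using the password hash as an HMAC key over the handshake key
        // yields a 256-bit key suitable for AES-256. A wrong password gives
        // a different key, so the tunnel simply fails to decrypt.
        using var hmac = new HMACSHA256(storedPasswordHash);
        return hmac.ComputeHash(handshakeKey);
    }
}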

The initial password still needs to be communicated to the server upon registration. So this would work well for all established user accounts, but creation of new user accounts would require additional protections (perhaps PQC doubling) when communicating a new password.

I would favor the former suggestion of PQC doubling, but there may well be good reasons to also use Whisp. And it shouldn’t be long before PQC can be relied upon on its own.

Busted! Internet Community Caught Unprepared

Internet Security (TLS) is no longer safe. That green HTTPS word, the golden padlock: all lies. The beneficiaries are the trusted third parties who charge for certificates. Yes, it sounds like a scam, but not one actively peddled; this one comes from complacency by the people who oversee the standards of the internet. Is there bribery involved? Who knows.

A month ago there were no problems with TLS, because it was only on the 6th of October that a paper was published which paves the way to building machines that can break TLS. These machines are called Quantum Computers. So where's the scam?

The nerds behind the Internet knew long ago about the threat of such a machine being developed. They also knew that new standards and processes could be built which are unbreakable even by a Quantum Computer. But what did they do? They sat on their hands.

I predicted in 2010 that it would take 5 years before a Quantum Computer would be feasible. I wasn't specific about a mass production date, and I was only 4 months out. Now it's feasible for all your internet traffic to be spied on, including passwords, if the spy has enough money and expertise. But that's not the worst part.

Your internet communication last year may be deciphered also. In fact, all of your internet traffic of the past, that you thought was safe, could be revealed, if an adversary was able to store it.

I wrote to Verisign in 2010 and asked them what they were doing about the looming Internet Emergency, and they brushed my concern aside. True, users have been secure to date, but they knew it was only a Security Rush. Like living in the moment and getting drunk, unconcerned about tomorrow's hangover, users have been given snake oil: a solution that evaporates only years later.

All these years, money could have been poured into accelerated research. There are solutions today, but they're not tested well enough. But the least that could be done is a doubling of security: have both the tried and tested RSA, as well as a new, theoretically unbreakable encryption, in tandem.

Why is there still no reaction to the current security crisis? There are solid solutions that could be enacted today.


The Fraying of Communication and a proposed solution: Bind

In medicine, the misinterpretation of a doctor's notes could be deadly. I propose that the ambiguity of even broader discourse has a serious and undiscovered impact. This problem needs to be researched and will be expounded on further, but first I would like to explore a solution, which I hope will further open your understanding of the problem.

As with all effective communication, I’m going to name this problem: Fraying. For a mnemonic, consider the ends of a frayed string being one of the many misinterpretations.

His lie was exposed, covered in mud, he had to get away from his unresponsive betraying friend: the quick brown fox jumped over the lazy dog.

That's my quick attempt at an example where context can be lost. What did the writer mean? What might a reader or machine algorithm misinterpret it to mean? Even with the preceding context, the final sentence can still be interpreted many ways. It's frayed in a moderate way, with minor impact.

In this example, it would be possible for the author to simply expound further on that final sentence, but that could ruin the rhythm for the reader (of that story). Another method is to add such text in parentheses. Either way, it's a lot of additional effort by multiple parties. And particularly in business, we strive to distil our messages to be short, sharp and to the point.

My answer of course is a software solution, but one where plain text is still handled and human readable. It’s a simple extensible scheme, and again I name it: Bind (going with a string theme).

The quick [fast speed] brown fox [animal] jumped [causing lift] over [above] the lazy dog [animal]

With this form, any software can present the data. Software which understands the scheme can remove the square brackets if there is no facility for an optimised viewing experience. For example:

The quick brown fox jumped over the lazy dog

(On the original page, hovering over the lighter coloured words revealed the bound meanings)

Since the invention of the computer and keyboard, such feats have been possible, but not simply, and certainly not mainstream.

So it would be important to proliferate a Binding text editor which is capable of capturing the intent of the writer.

The benefits of Binding go beyond solving Fraying. They add more context for disability accessibility (I would argue Bind is classed as an accessibility feature – for normative people), and depending on how many words are Bound, could even assist with language translation.

Imagine Google Translate with a Binding text editor; the translations would be much more accurate. Imagine Google search, where you type "Leave", hover over the word and select [Paid or unpaid time off work], leaving you less encumbered with irrelevant results.

Such input for search and translation need not wait for people to manually bind historical writing. Natural Language Processing can bear most of the burden and when reviewing results, a human can review the meaning the computer imputed, and edit as needed.

We just need to be able to properly capture our thoughts, and I’m sure we’ll get the hang of it.

Hey, by the way, please add your own narrative ideas for “the quick brown fox jumped over the lazy dog”, what other stories can that sentence tell?

Appendix – Further Draft Specification of Bind:

Trailer MetaData Option:

  • Benefit: the metadata is decoupled visually from the plain text. This makes viewing on systems without support for the Bind metadata still tolerable for users.
  • Format: [PlainText][8x Tabs][JSON Data] (a parsing sketch follows this list)
  • Json Schema: { BindVersion: 1, Bindings: […], (IdentifierType: "Snomed") }
  • Binding Schema: { WordNumber: X, Name: "Z", Identifier: "Y", Length: 1 }
  • Word Number: the word's index, where words are delimited by whitespace and punctuation is trimmed.
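A small C# sketch of reading the trailer option: split on the 8-tab separator and deserialise the JSON. The record shapes mirror the draft schema above; everything else here is assumption.

using System;
using System.Text.Json;

// Sketch of a trailer-metadata reader for Bind (schema as drafted above).
public record Binding(int WordNumber, string Name, string Identifier, int Length);
public record BindDocument(int BindVersion, Binding[] Bindings, string IdentifierType);

public static class BindReader
{
    const string Separator = "\t\t\t\t\t\t\t\t"; // the 8-tab divider

    public static (string PlainText, BindDocument Metadata) Parse(string raw)
    {
        int index = raw.IndexOf(Separator, StringComparison.Ordinal);
        if (index < 0)
            return (raw, null); // no Bind metadata; plain text only

        string plainText = raw.Substring(0, index);
        string json = raw.Substring(index + Separator.Length);
        return (plainText, JsonSerializer.Deserialize<BindDocument>(json));
    }
}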

Mixed MetaData Option:

  • When multiple preceding words are covered by the Binding, a number of dashes indicates how many more words are covered. Bind Text: "John Smith [-Name]" indicates the two words "John Smith" are a Name.
  • The identifiers specified in ontological databases such as Snomed may be represented with a final dash and then the identifier. Bind Text: "John Smith [-Name-415]" indicates a word definition identifier of 415, which may have a description of "A person's name".
  • When a square bracket is intended by the author, output a double square bracket. Bind Text: "John Smith [-Name] [[#123456]]" renders plainly to "John Smith [#123456]"

Music needs Intelligence and hard work

In all things, I believe a person's overall intelligence is the first factor which determines their performance. Some of the best sporting athletes find themselves running successful business ventures. The same goes for the best comedians. Of course, hard work and training are necessary for any craft that an intelligent person applies themselves to, but good outcomes seldom happen by accident.

Today I stumbled across a YouTube clip – Haywyre – Smooth Criminal – and concluded that this was one smart guy, and at such a young age! This assumption was further supported by some brief research through other news articles about him. He has done most of the mastering of his albums himself, and I wouldn't be surprised if he produced the YouTube video clip and website on his own too! When such intelligence collides with a focused, hard work ethic, this is what you get. Of the music articles so far written, I don't think any of the writers have realised yet that they are writing about a genius just getting started.

His style definitely resonates with me, with its Jazz and Classical roots, but most important for me is the percussive expression that drives his compositions. Too many people will be captivated by the improvisation in the melody, but that's only one layer of his complex compositions. If he's still working solo, he will need to find good people to collaborate with in the future to reach his full potential. I hope Martin applies himself to other genres of music and other pursuits.

I happen to work in the software development industry, and have found that it doesn't matter how much schooling or experience someone has had: anyone can have their potential capped by their overall intelligence. One's brain capacity is somewhat determined by genes, diet and early development. Once you have fully matured, there's little or no ability to increase your brain power. That would be confronting for a lot of people who find themselves eclipsed by giants of thought.

So it's no wonder intelligence is seldom a measure of a person these days. Musicians are often praised as being talented for their good music, but that excludes all others: the implication is that you must have some magical talent to succeed. As with creativity, the truth is less interesting, but very important. We should be pushing young children to develop intelligence, and to value intelligence, not Hollywood "talent". I suspect that valuing intelligence publicly risks implying a lack of intelligence in ineffective musicians (the same applies to other crafts).

Don’t let political politeness take over.

Digital Things

The "Internet of Things" is now well and truly established as a mainstream buzzword. The reason for its success could be explored at length; however, the term is becoming overused, just like "Cloud". It has come to mean many different things to different people in different situations. "Things" works well to describe technology reaching smaller items, but "Internet" is only a component of a broader field that we can call Digital Things.

This Digital Things revolution is largely driven by the recent accessibility of tools such as Arduino, Raspberry Pi and more: a miniaturisation of computing that stretches even the definition of embedded computing. Millions of people are holding such tools in their hands, wondering what to do with them. They all experience unique problems, and we see some amazing ideas emerge from these masses.

In health, the quantified self may eventually see information flow over the internet, but that's not what all the fuss is about. Rather, it's about Information from Things: measuring as much as we can, with new sensors being the enablers of new waves of information. We want to collect this information and analyse it. Connecting these devices to the internet is certainly useful for collecting and analysing that information.

Then there are many applications for the Control of Things. Driverless cars are generally not internet connected; neither are vacuum robots, burger-building machines, a novel 100k-colour pen, or many, many more things. It would seem that using the term Internet of Things as inspiration limits the possibilities.

In the end, Digital Things is the most suitable term to describe what we are seeing happen today. We are taking things in our lives which normally require manual work and using embedded electronics to solve problems, whether for information or control; the internet is not always necessary.

Let's build some more Digital Things.

Geelong has a clean slate

I hope you’re done. Q&A was your last chance to detox from any doom and gloom you had left.

The loss of jobs, particularly at Ford, is not a pleasant experience for retrenched workers, but there's no changing the past. The fact is, Geelong now has a clean slate to dream big, and driverless electric vehicles are a perfect fit for the future of manufacturing.

On Q&A last night, Richard Marles was spot on, describing the automotive industry as one of our most advanced in supporting technical innovation in Australia. But ironically, the industry as a whole has missed the boat and was always on a trajectory toward disaster.

I have been watching the industry since 2010. I have observed the emerging phenomenon of the electric vehicle, and the need for, but lack of, interest from our local automotive industry. I have realised that automation is to be embraced despite the unpleasant short-term job losses. And still we're about to miss a huge opportunity.

The public forum is full of emotion, desperation, finger pointing, and frankly ignorance.

Geelong, we have a clean slate.

Kindly watch this video, https://www.youtube.com/watch?v=CqSDWoAhvLU – it's all Geelong needs to drop the past and grasp the future. Share it with your friends and call up all the politicians you know. It's been there the whole time, and this vision for Geelong is all we need to forget our sorrows. You won't understand unless you see the video. We need to act now.

I have covered Electric Vehicles comprehensively in the past, but they're today's reality. We need to aim higher. Does Geelong even know anything about driverless cars?

People are immediately cautious of change, which is why the technology needs to be tested and tested here in Geelong. This will be a great focal point for our retraining efforts. Imagine cheap transport and independence for the elderly and disabled. Cheaper, safer and faster deliveries. Reduced traffic congestion and elimination of traffic lights – no stopping! Cars that drop you off and pick you up will park out of town – what car parking problem? What will we do with all those empty car park spaces in the city? More green plants and al fresco dining?

But most importantly zero road fatalities. If this is the only reason, it’s all we need.

They are legal in California today. What stepping stones will we take to legalise fully driverless cars in Victoria? These massive technology companies will only move to hospitable markets. Who is talking to Nissan and Tesla about building the next generation of electric driverless vehicles in Geelong? We have been given a clean slate; there are too many exciting opportunities around to waste any more time on self-pity!

Oh, and trust me when I say that's just the tip of the iceberg – I'm not telling you everything; find out for yourself. Click all the links found in this article for a start, it's what they're for.

Hint: There’s more to come from me, including the idea to start a “Manufacturing as a Service” company for Automotive, just like Foxconn does for electronics in China, inviting the Ford/Alcoa workers, their investment, GRIIF investment, outside investors and Tesla. There’s lots more work to do, but it’ll be worth it.



Let's leave Javascript behind

Disclaimer: I am sure Javascript will continue to be supported, and even continue to progress in features and support, regardless of any Managed Web. Some people simply love it, with all its flaws and pitfalls, like a sweet elderly married couple holding onto life for each other.

It's great what the web industry is doing with ECMAScript; from version 6 we will finally see something resembling classes and modules. But isn't that something the software industry has had for years? Why do we continue to handicap the web with an inferior language when there have always been better options? Must we wait another 2-3 years before we get operator overloading in ECMAScript 7?

The .Net framework is a rich, standardised framework with an Intermediate Language (IL). The compiler optimisations, the toolset and, importantly, the security model make it a vibrant and optimised ecosystem which could be leveraged. It could have been leveraged years ago with a bare-minimum Mono CLR.

Google Chrome supports native code, however it runs in a separate process and calls to the DOM must be marshalled through inter-process communication. This is not ideal. If the native code support were in the same process, it would be a good foundation for Mono.

I believe it is possible, perhaps even trivial, to achieve this nirvana of a Managed Web. We just need to take small considered steps to get there, so here’s my plan.

  1. Simple native code in the same process – Javascript is currently executed on the main thread, presumably through the window message pump executing delegates. These delegates could simply forward to managed function delegates. But first, we should be able to trigger an alert window through native code compiled inside the Google Chrome code base.
  2. Simple Mono support – Fire up Mono and provide enough support in a Base Class Library (BCL) to trigger an alert. This time there will be an IL DLL with a class which implements an interface for start-up.
  3. Fuller API – With the simple milestones above completed, a complete BCL API can be designed and implemented.
  4. Optimisations – For example, enumerating the DOM may be slowed by crossing the Managed/Unmanaged boundary. jQuery-like functions could be implemented in native code and exposed through the BCL.

Along the way, other stacks and browsers could also leverage our work, establishing support for at least Java as well.

Example API:

IStartup

  • void Start(IWindow window) – Called when the applet is first loaded, just like when Javascript is first loaded (For javascript there isn’t an event, it simply starts executing the script from the first line)

IWindow
see http://www.w3schools.com/jsref/obj_window.asp

IDocument
see http://www.w3schools.com/jsref/dom_obj_document.asp
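To give a feel for the shape of it, here's a rough C# sketch of those interfaces and a trivial applet. Only Start(IWindow) comes from the plan above; the other members are assumptions, loosely mirroring the Javascript window and document objects linked above.

// Rough sketch only; members other than Start are illustrative assumptions.
public interface IStartup
{
    void Start(IWindow window);           // called once the applet is loaded
}

public interface IWindow
{
    IDocument Document { get; }
    void Alert(string message);           // would map to window.alert
}

public interface IDocument
{
    IElement GetElementById(string id);   // would map to document.getElementById
}

public interface IElement
{
    string InnerHtml { get; set; }
}

// A "hello world" applet under this contract:
public class HelloApplet : IStartup
{
    public void Start(IWindow window)
    {
        window.Alert("Hello from managed code");
        window.Document.GetElementById("status").InnerHtml = "Managed Web is running";
    }
}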

Warm up – Possible disadvantage

Javascript can be interpreted straight away, and there are several levels of optimisation applied only where needed, favouring fast execution time. IL would need to be JIT'd, which would be a relatively slow process, but there's no reason why it couldn't be AOT-compiled by the web server. Still, I see this as the biggest disadvantage, and it needs to be kept front of mind.

Other people around the web who want this

http://tirania.org/blog/archive/2012/Sep-06.html

 


University BC

University is becoming increasingly irrelevant for the IT industry.

It's 3 years of full-time study, yet in a month a talented 12-year-old kid can write an app that makes him a millionaire. Course content is always lagging behind; not for lack of pushing by academics and industry, but because the bureaucracy of the system drags. With content such as teamtreehouse.com on the up, there is potential for real upset in the IT education market. And without any entrepreneurship support, there is no excitement and no potential to build something meaningful from nothing. Increasingly, universities will be perceived as the old way, by new students as well as by industry looking to hire.

I would like to see cadetships for IT, with online content and part-time attendance at training institutions for professional development and assessment. Although even assessment is questionable: students are not provided access to the internet during assessments, which does not reflect any true-to-life scenario. I value a portfolio of code over grades.

I seek individuals who have experience in Single Page Applications (SPA), Knockout.js, Javascript, jQuery, Entity Framework, C#, 2D drawing, graphic art, and SQL (window functions). Others are looking for Ruby on Rails developers. All of my recent graduates have very limited exposure to any of these.

I could be wrong, but if I am right, institutions who ignore such facts are only going to find out the hard way. I know the IT industry has been reaching out to Universities to help keep them relevant, it’s time for Universities to reach back out to the industry, and relinquish some of their control for the benefit of both students and the industry.


Inverse Templates

Hackathon project – Coming soon….

[Start Brief]
Writing open source software is fun, but to get recognition and feedback you need to finish and promote it. Todd, founder of Alivate, has completed most of the initial parts of a new open source project, "Inverse Templates", including most of the content below, and will work with this week's hackathon group to publish it as an isolated open source project and NuGet package.

Skills to learn: Code Templating, Code Repositories, NuGet Packages, Lambda, Text Parsing.
Who: Anyone from High School and up is encouraged to come.

We will also be able to discuss future hackathon topics and schedule. Don’t forget to invite all of your hacker friends!

Yes, there will be Coke and Pizza, donated by Alivate.
[End Brief]

The Problem

Many template engines re-invent the wheel, supporting looping logic, sub-templates and many other features. Any control code is also awkward, and extensive use makes template files look confusing to first-time users.

So why have yet another template engine, when instead you can simply leverage the coding language of your choice, along with the skills and experience you fought hard for?

The Solution

Normal template engines have output content (HTML, for example) as the first-class citizen, with variables and control code being second class. Inverse Template systems are about reversing this. By using the block-commenting feature of (at least) C-like languages, Inverse Template systems let you leverage the full power of your programming language.

At the moment we only have a library for C# Inverse Templates. (Search for the NuGet Package, or Download and reference the latest stable DLL)

Need a loop? Then use a for, foreach, while, and more.
Sub-templating? Call a function, whether it’s in the same code-file, in another object, static or something more exotic.

Introductory Examples

Example T4:

Introductions:
<# foreach (var Person in People) { #>
Hello <#= Person.Name #>, great to iterate you!
<# } #>

Example Inverse Template:

/*Introductions:*/
foreach (var Person in People) {
/*
Hello */w(Person.Name);/*, great to iterate you!*/
}

As you can see, we have a function named w, which simply writes to the output file. More functions are defined for tabbing, and being an Inverse Template, you can inherit the InverseTemplate object and extend it as you need! These functions are named with a single character so they aren't too imposing.

Pre-Processing
As with T4 pre-processing, Inverse Template files are also pre-processed, converting comment blocks into code, then saved as a new file which can be compiled and debugged. Pre-processing, as opposed to interpreting templates, is required because we rely on the compiler to compile the control code. Furthermore, there are performance benefits to pre-processed (and compiled) templates over interpreted ones.

Example pre-processed Inverse Template:

l("Introductions:");
foreach (var Person in People) {
  n("");
  t("Hello ");w(Person.Name);w(", great to iterate you!");
}

Function l will output any tabbing, then the content, then a line-ending.
Function n will output the content followed by a line-ending.
Function t will output any tabbing followed by the content (no line-ending).
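For readers wondering what those single-character helpers might look like inside the base class, here's a minimal sketch. This is my illustration of the idea only, not the actual library code.

using System.Text;

// Minimal sketch of the output helpers an InverseTemplate-style base class could provide.
public abstract class InverseTemplateSketch
{
    private readonly StringBuilder _output = new StringBuilder();
    protected string Indent = "";

    protected void w(string content) { _output.Append(content); }                              // content only
    protected void t(string content) { _output.Append(Indent).Append(content); }               // tabbing, then content
    protected void n(string content) { _output.Append(content).AppendLine(); }                 // content, then line-ending
    protected void l(string content) { _output.Append(Indent).Append(content).AppendLine(); }  // tabbing, content, line-ending

    public abstract void Generate();
    public string Output { get { return _output.ToString(); } }
}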

The pre-processor will find and process all files in a given folder hierarchy ending with “.ct.cs”. The pre-processor is an external console application, so that it will even work with Express editions of Visual Studio.

You should:

  • Put all of your Definitions into folder .\InverseTemplates\Definitions\, sub-folders are ok
  • Actively exclude and then re-include the generated .\InverseTemplates\Processed\ folder after pre-processing
  • Exclude the Definitions folder before you compile/run your project

Not the answer to all your problems

I'm not claiming that Inverse Templates are the ultimate solution. They're simply not. If you have content-heavy templates with no control code and minimal variable merging, then perhaps you just want to use T4.

Also, you may find that you’re more comfortable using all of the InverseTemplate functions directly {l,n,w,t}, instead of using comment blocks. In some cases this can look more visually appealing, and then you can bypass the pre-processing step. This could be particularly true of templates where you have lots of control code and minimal content.

But then again, keep in mind that your code-editor will be able to display a different colour for comment blocks. And perhaps in the future your code-editor may support InverseTemplates using a different syntax highlighter inside your comment blocks.

For a lot of work I do, I’ll be using Inverse Templates. I will have the full power of the C# programming language, and won’t need to learn the syntax of another template engine.

I’m even thinking of using it as a dynamic rendering engine for web, but that’s more of a curiosity than anything.

Advanced Example – Difference between Function, Generate and FactoryGet

class TemplateA : InverseTemplate {
  public override void Generate() {
    /*This will be output first, no line-break here.*/
    FunctionC(); //A simple function call, I suggest using these most often, mainly to simplify your cohesive template, when function re-use is unlikely.
    Generate<TemplateB>(); //This is useful when there is some function re-use, or perhaps you want to contain your generation in specific files in a particular structure (generic signature assumed here)
    IMySpecial s = FactoryGet<IMySpecial>("TemplateD"); //This is useful for more advanced situations which require a search by interface implementation, with optional selection of a specific implementation by class name.
    s.SpecificFunction("third");
  }
  private void FunctionC() {
    /*
    After a line-break, this is now the second line, with a line-break.
    */
  }
}
class TemplateB : InverseTemplate {
  public override void Generate() {
    /*This will be the third line.*/
  }
}
interface IMySpecial
{
  void SpecificFunction(string SpecificParameter);
}
class TemplateD : InverseTemplate, IMySpecial
{
  public void SpecificFunction(string SpecificParameter) {
    /* This will follow on from the */w(SpecificParameter);/* line.
    */
  }
}
class TemplateF : InverseTemplate, IMySpecial
{
  //Just to illustrate that there could be multiple classes implementing the specialised interface
  public void SpecificFunction(string SpecificParameter) { }
}

Advanced – Indenting

All indent is handled as spaces, and is tracked using a stack structure.

pushIndent(Amount) will increase the indent by the amount you specify; if no parameter is specified, the default is 4 spaces.
popIndent will pop the last amount of indent pushed onto the stack.
withIndent(Amount, Action) will increase the indent only for the duration of the specified action.

Example:

withIndent(8, () => {
  /*This will be indented by 8 spaces.
  And so will this, on the next line.
  I recommend you only use this when calling a function.*/
});
/*This will not be indented.*/
/*Within a single function you should
    control your indent manually with spaces.*/
if (1 == 1) {
/*
    it will be easier to see compared to calls to any of the indent functions {pushIndent, withIndent, etc..}*/
  if (2 == 2) {
/*
    just keep your open-comment-block marker anchored in-line with the rest*/
  }
}

These are all the base strategies that I currently use across my Inverse Templates. I also inherit InverseTemplate and make use of the DataContext, but you’ll have to wait for another time before I explain that in more detail.


IL to SQL

There are some cases where one needs to perform more complex processing, necessitating application-side processing OR custom SQL commands for better performance. For example, splitting one column of comma-delimited data into 3 other columns:

public virtual void DoEntityXSplit()
{
  var NeedSplitting = db.EntityXs.Where(x => !x.Splitted1.HasValue);
  foreach (var item in NeedSplitting)
  {
    string[] split = item.DelimitedField.Split(',');
    item.Splitted1 = split[0];
    item.Splitted2 = split[1];
    item.Splitted3 = split[2];
    item.Save(); //THIS...
  }
  db.SaveChanges(); //OR this
}

When you run DoEntityXSplit, the unoptimised code may run. However, if supported, automatic optimisation is possible, derived from the IL (Intermediate Language – aka .Net bytecode) body of the method, when:
i) the ORM (Object Relational Mapping – eg. nHibernate / EntityFramework) supports some sort of "IL to SQL" compilation at all; and
ii) the function doesn't contain any unsupported patterns or references.
Then the raw SQL may be run. This could even include the dynamic creation of a stored procedure for faster operation.

public override void DoEntityXSplit()
{
  //This is pseudo SQL code
  db.RunQuery("
    declare cursor @NeedSplitting as (
      select ID, DelimitedField
      from EntityXs
      where Splitted1 is null
    );

    open @NeedSplitting;
    fetch next from @NeedSplitting into @ID, @DelimitedField
    while (@StillmoreRecords)
    begin
      @Splitted1 = fn_CSharpSplit(@DelimitedField, ',', 0)
      @Splitted2 = fn_CSharpSplit(@DelimitedField, ',', 1)
      @Splitted3 = fn_CSharpSplit(@DelimitedField, ',', 2)

      update EntityX
      set Splitted1 = @Splitted1,
      Splitted2 = @Splitted2,
      Splitted3 = @Splitted3
      where ID = @ID

      fetch next from @NeedSplitting into @ID, @DelimitedField
    end
  ");
}

of course this could also be compiled to

override void DoEntityXSplit()
{
  //This is pseudo SQL code
  db.RunQuery("
    update EntityX
    set Splitted1 = fn_CSharpSplit(@DelimitedField, ',', 0),
    Splitted2 = fn_CSharpSplit(@DelimitedField, ',', 1),
    Splitted3 = fn_CSharpSplit(@DelimitedField, ',', 2)
    where Splitted1 is null
  ");
}

but I wouldn't expect that from version 1, or would I?

Regardless, one should treat IL as source code for a compiler which has optimisations for T-SQL output. The ORM mappings would need to be read to resolve IL properties/fields to SQL fields. It may sound crazy, but it's definitely achievable, and this project looks like a perfect fit for such a feat.
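As a feel for the starting point: the raw IL of a method is already easy to get at via reflection, and decoding the opcodes and mapping ORM properties to columns is the (much larger) remaining work. MyRepository here is just a stand-in for the class holding DoEntityXSplit above.

using System;
using System.Reflection;

// Stand-in for the repository/context class holding the method shown earlier.
public class MyRepository
{
    public virtual void DoEntityXSplit() { /* as defined above */ }
}

public static class IlToSqlSketch
{
    public static void Main()
    {
        // Step one of an IL-to-SQL compiler: obtain the method's IL bytes.
        MethodInfo method = typeof(MyRepository).GetMethod("DoEntityXSplit");
        byte[] il = method.GetMethodBody().GetILAsByteArray();
        Console.WriteLine($"{method.Name}: {il.Length} bytes of IL to translate into T-SQL");
    }
}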

Where will BLToolKit be in 10 years? I believe ILtoSQL should be a big part of that future picture.

If I get time, I'm keen to have a go. It should be built as a standalone DLL which any ORM can leverage. Who knows, maybe EF will pick this up?


Poppler for Windows

I have been using the Poppler library for some time, over a series of various projects. It's an open source set of libraries and command line tools, very useful for dealing with PDF files. Poppler is targeted primarily at the Linux environment, but the developers have included Windows support in the source code as well. Getting the executables (exe) and/or DLLs for the latest version is, however, very difficult on Windows. So after years of pain, I jumped on oDesk and contracted Ilya Kitaev to both compile with Microsoft Visual Studio, and also prepare automated tools for easy compiling in the future.

So now, you can run the following utilities from Windows!

  • PDFToText – Extract all the text from a PDF document. I suggest you use the -layout option to get the content in the right order (see the example after this list).
  • PDFToHTML – Which I use with the -xml option to get an XML file listing all of the text segments' text, position and size; very handy for processing in C#.
  • PDFToCairo – For exporting to image types, including SVG!
  • Many more smaller utilities
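As mentioned in the list above, here's a small C# sketch of driving pdftotext on Windows. The paths are only examples; point them at wherever you extracted the binaries and your own PDF.

using System;
using System.Diagnostics;

// Run the Windows build of pdftotext with the -layout option from C#.
class PopplerExample
{
    static void Main()
    {
        var psi = new ProcessStartInfo
        {
            FileName = @"C:\tools\poppler\pdftotext.exe",   // example install location
            Arguments = "-layout \"input.pdf\" \"output.txt\"",
            UseShellExecute = false,
            RedirectStandardError = true
        };

        using (Process p = Process.Start(psi))
        {
            string errors = p.StandardError.ReadToEnd();
            p.WaitForExit();
            Console.WriteLine(p.ExitCode == 0 ? "Extracted text to output.txt" : errors);
        }
    }
}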

Download

Latest binary : poppler-0.47_x86.7z

Older binaries:
poppler-0.45_x86.7z
poppler-0.44_x86
poppler-0.42.0_x86.7z
poppler-0.41.0_x86.7z
poppler-0.40.0_x86.7z
poppler-0.37_x86.7z
poppler-0.36.7z
poppler-0.35.0.7z
poppler-0.34.0.7z
poppler-0.33.0.7z
poppler-0.26.4.7z
poppler-0.26.3.7z
poppler-0.26.1_x86
poppler-0.26.1
poppler-0.22.0
poppler-0.18.1

Man-made Gaia

see: http://www.digitaltrends.com/cool-tech/nasa-turns-the-world-into-music/

I'm sure the NASA scientist knew how popular the story would be when he decided to artificially translate the measurements of a scientific experiment into audible sounds.

The journalist attempts to imply that the audio is a direct feed with no translation: "The fact that the data is sampled at the same rate as a CD isn't entirely accidental." Those in the audio production business would understand that the sampling rate alone says nothing about how it sounds; rather, the frequency range of the sampled signal does. And given the samples were taken over a 60-day period, we can be sure that the frequencies were very low and were sped up.

I'm not terribly familiar with the EMFISIS project, but from what I hear, the audio sounds like the detection of bursts of charges being expelled from the Sun (affected by the Earth's magnetic shield, of course). That is, a quantity moving from a low resonance to a higher one. But of course this quantity had to be artificially scaled to be audible, and it's this scaling which produces the sound of birds in a rainforest. If you listen closely, the sound also resembles chipmunks, which is what happens when you record even your own voice and speed it up.

When scaling the audio, the engineer chose the frequency range carefully to make it sound like the Earth was in harmony with the rainforests. Quite a dishonest representation. The original frequencies are much lower, and mapping them to the lowest audible frequencies would have been more justifiable; however, this would have resulted in the sound of a group of fat birds singing. Not the effect sought after.

Of course, it may all be in good spirit and fun. But NASA is funded by taxpayers in the US, and this isn't science; it is pandering to Pagan religion. Simply read all the comments left on the sound clip: they're on a spiritual high.

Sasha Burden – Pre-Dictator of Australia

A glimpse into the self-absorbed world of the totalitarian journalist. It happens only rarely; they try to keep their ugly, self-moralising core hidden from view, but we occasionally see it rear up its repugnant head. Andrew Bolt blogged about an intern who wrote about her experiences in a tone that uncovers a stereotypical totalitarian (dictator), one who would make everyone think like her. She's right, everyone else is wrong; she is the sole purveyor of good. Sounds kind of religious, doesn't it?

I was so confronted and appalled by her writing, I couldn't help but write what I really thought. In so many workplaces and homes, people talk about issues free from the ears of political correctness, and as a result speak much more honestly and plainly about how they feel and what they're thinking. Burden's propaganda was a trigger for me; the gap between reality and the political class is immense, and too many people are afraid to publicly oppose the unrelenting tide of so-called progressivism. Would it hurt their business? Would they lose friends? These people describe a world where opposing views are in the minority, despite the polling. The late John Linton from Exetel is one source of inspiration; he opened many a can of worms. Let's see what really happens.


So Burden was allowed to sit in at editorial meetings.

    Comments in the news conference included "Of course he's fat, look at what he eats" and "How does someone let that happen?"
Was Burden born under a rock? Probably the inner suburbs of Melbourne, close enough. If she entered any real-life workplace she would find such comments embedded in Australian culture. In fact, it's typical of human nature, having a laugh at someone's expense. It's honest, however: obvious and, most importantly, free speech. Would Burden have such speech suppressed in the workplace?
It's a bit like calling a white-skinned person a "whitefella" – they have white skin! But Burden doesn't appear to want people telling the truth; oh no, she would have people cower in the dark ages, with her seasoned judgement marking heresies in the workplace.

Her suppression of the truth continues:

    …a female journalist bizarrely insisted that an article debating the benefits of chocolate should be written by a female: "A woman needs to say chocolate is good."
Burden, it's a well-known fact that women openly adore chocolate (they've come out of the cupboard) and respond well to other women's opinions. It's a well-known fact that women are physically and even mentally different to men. Real people have no problem with this, and embrace it. Burden would rather dictate the opposite to society.
She is clearly quite easily offended by the mundane:
    …a potential story on a trans person with him. His remarks included, "He? She? It?", "There has to be a photo of it" and "You should put the heading—'My Life As A She-Man!' or 'G-Boy.'" No one in the newsroom reacted.
That’s So Gay (Photo credit: Wikipedia)

Of course, I expected her to pass holy judgement on an opinion about gay marriage:

…moved from transphobia to homophobia on the eighth day, commenting on a recent piece on gay marriage. “Why are they [the gay community] making such a fuss? It’s been this way for millennia, why change now?”
How dare an individual have an opinion! He should relinquish his persona and join the (minority) collective! This is the poster issue for people like Burden: uninterested in how people feel, working their hardest to engineer opinions over decades of unrelenting pontificating. It’s the epitome of self-righteousness, her ego towering above the peasant minds of society.
She readily misinterprets kindness:

Men were also continuously and unnecessarily sexist, waiting for me to walk through doors and leave the elevator before them
It’s quite possible this workplace holds doors for females, but who cares? Most likely the men hold doors for other men too, and it’s her narrow-minded philosophy which blinded her to that. Generally, I’ve found females not to be as logical as males, often more concerned with a strongly held opinion, quite emotional, different from males. It’s ironic that it’s her type, who insist we accept people for who they are and not try to change them, yet I see women being bullied into taking a career, and on the other hand hear of women regretting not having children when they were younger, wishing they could have mothered their children. It is frustrating at times to see such busybodies, nosey about everyone else’s business.
Compasses (Photo credit: Wikipedia)

So why would someone be like Sasha Burden? It’s difficult to guess from my vantage point. I don’t know her personally, but I can assume. I don’t need to speculate on her past, her upbringing or her external influences; I can guess from how she stands today, and guess her aspirations. She wants to be seen as purely moral, taking up the cause of redefining the concept of morality with her own values. She does not like testing her assumptions, and once she has taken a side on an issue she will fight on despite the facts; she will never concede defeat. I doubt such a person can ever change, but I sure hope I never become so arrogant. I hope she fails her inquisition, and never fully imposes her incoherent morality on Australians. I hope more people can stand up against people like her. It’s hard, because she purports to stand for morality; she has the high ground.


Atlas: A brilliant new composition by Daniel Johns

A couple of days ago, I saw a brief clip on TV of the new Qantas composition by Daniel Johns. It was a small glimpse, and I wasn’t immediately impressed.

Cover of "Freak Show"
Cover of Freak Show

Having grown up listening to Silverchair songs from the albums Freak Show and Neon Ballroom, I have never fully appreciated Daniel Johns’ Pop transition and experimentation. When Silverchair started producing their own albums, their music drifted from its original grunge roots and was allowed much more exploration, particularly by the most influential member, Daniel Johns.

From Diorama, it was evident that Daniel Johns desired a more palatable Pop sound, highly produced but with the clear promise that it was still his true voice, with no artificial pitch changes. The transition has truly been painful as a customer, however I still thoroughly enjoy Johns’ newer material, even though I would normally avoid Pop where I can.

It was in the album Freak Show that we first got an introduction to Johns’ affection for strings. The scores highlight his style, with modal and sometimes atonal chord changes which blended well with the chaotic grunge, by then mellowing.

Cover of Daniel Johns

See: http://www.theage.com.au/travel/travel-news/daniel-johns-gets-on-board-as-qantas-replaces-iconic-i-still-call-australia-home-anthem-20120720-22dxx.html

After hearing the piece in its entirety, I found the gems of brilliance which are characteristic of Johns. Daniel speaks of wanting something that “sounded international”, and I think he has nailed it.

Intro Phrase

The opening riff, which has the timbre of solitary warm guitar harmonics, gives a feeling of time, which instantly associates Qantas with travel and, by extension, the international sound desired. The riffs have a faint chordal melody. When coupled with visuals of people looking to the sky, I imagine Qantas in the sky, not necessarily an aircraft. A piano is introduced, but is interestingly suppressed (most likely in post-production), playing chords {E, Am, C} on the first beat, accented by a tuned drum.

Voice Chorus

After a couple of repeats, this phrase cuts straight into the vocal chorus, complete with harmonies and strings, a crash of cymbals and a couple of bangs on a deep, suppressed drum, with all previous instrumentation abandoned. I found this transition quite abrupt; there may have been a softer way to handle it, but it sounds right. Such changes are one of my favorite qualities of Silverchair songs: they can change quite abruptly mid-song (making them much more interesting than the simplistic bridge found in typical music of repetitive progression), but as long as they come back to the original theme, the song feels complete and resolved.

I’m glad Daniel used his voice. He has a wide vocal range, but uses falsetto (and generous reverb) to give a spacious, heavenly feel. The key of the vocal chorus is complementary to the opening chords played on the piano, and the melody loosely follows an arpeggio form, with descending patterns set against contrary ascending patterns from the violins. It sounds like some of the backing voices are multi-tracked from Daniel’s own, but other voices are added as well.

Tying back to Temporal

The piano and guitar return, with simple chords on the piano (which no longer sound suppressed, probably because the drum is no longer accenting the chords). The guitar plays a rhythm of a single note per bar; the rhythm fits in with the opening riff, sounding temporal. At concerts Daniel often used many effects pedals, and I wouldn’t be surprised if he achieved this sound manually. A keyboard is then added in a high register, further strengthening the international, temporal effect. It’s at this point that the earlier cut to the chorus sounds tied in.

ACO Virtual Orchestra (Photo credit: .M.)

The Completion

This continues with some key lifts and drops, strong spirited builds, and some additional instrumentation including brass. It is fairly rudimentary, but effective and required to finish off the song. The final phrase reverts to the original instrumentation of the introduction.

Orchestra

It was notable that the Australian Chamber Orchestra was used for strings, as Silverchair have used many international players and orchestras in the past. The ACO were obviously chosen for being Australian, and they were fantastic. I’ve heard them before, having purchased a year’s subscription to their concerts, and found that I prefer the ACO accompanying Daniel Johns’ music over classical works unfamiliar to me, although there is nothing like being there; no sound system can compare.

Lyrics

The absence of lyrics is strange. Perhaps he was directed not to add lyrics and a suitable vocal melody. There is real potential in this piece to write a lyrical song, one which could actually stand well beside “I still call Australia home”. I might attempt to cover this piece one day and add some lyrics to demonstrate the potential.

Conclusion

The quality of this work speaks of a hard-working perfectionist. From instrumentation to melody, it seems that Daniel Johns works best when he is given some external direction, something to focus on. It must be difficult for Daniel to have this piece compared to “I still call Australia home”, a very patriotic and accessible song. In that regard there is no comparison: it has history and culture and resounds with every Australian. But this is a brilliant piece and will represent Australia well alongside the classic. I hope Johns continues to accept external influences, collaborating with a broader community of artists and clients, and I can’t wait to hear the outcomes of his many projects to come.


We will never meet a space-exploring or pillaging Alien

The thought of Aliens captures the imagination: countless worlds beyond our own with advanced civilizations. But I have strong suspicions that we will never meet an Alien. I’ve always had my doubts, and then I recently read an article which uses very sound reasoning to preclude their existence (I don’t have the reference for the specific one).

DON’T EXIST

It basically goes:

  1. The Universe is roughly 13 billion years old – plenty of time for Aliens to develop technology
  2. The Universe is gigantic – plenty of places for various Aliens to develop technology
  3. We would want to find other Aliens – Other Aliens would want to also look for other life
  4. Why haven’t they found us? Why haven’t we found them?
  5. Because they don’t exist
If Aliens existed, we would have found them when we first started surveying space and searching for them: they, as we do, would have been transmitting signals indicating intelligence.
NEVER MEET
But there is also another, less compelling, reason. The Universe appears to be expanding, and that expansion is accelerating. Unless worm-hole traversal is found to be practically feasible, the meeting part will never happen.
OTHER REASONS
Here are some more links to other blogs and articles I found, which add more information and other reasons which logically prove that Aliens don’t exist:
I concede that one or even several logical arguments cannot prove absolutely that Aliens do not exist; we can only be, say, 99.9% or more confident. Only if we searched the entire cosmos and concluded that none exist could it be an absolute fact. We could have an Alien turn up tomorrow and explain that they have searched the Universe and only just recently found us, that it’s only them and us, and that their home world is hidden behind another galaxy or nebula or something. So logic alone is not definitive, but it is certainly a good guide if the logic itself is not disproven.
Take Fermat’s Last Theorem, for example: it was proven “358 years after it was conjectured”. There were infinitely many cases to check, so an exhaustive evaluation was not practical; a mathematical proof was required. Many believed it to be true, of course, but Mathematics, being a science, required proof.
So unless we can prove that Aliens don’t exist with scientific observation, and not just with probability, one cannot say with authority that Aliens don’t exist. But at the same time, one definitely cannot believe that Aliens do exist without significant proof.

Windowing functions – Who Corrupted SQL?

I hate writing sub-queries, but I seem to hate windowing functions even more! Take the following:

select
PR.ProfileName,
(select max(Created) from Photos P where P.ProfileID = PR.ID) as LastPhotoDate
from Profiles PR

In this example, I want to list all Profile names, along with a statistic: the date of the most recently uploaded photo. It’s quite easy to read, if a little bloated, but compared to the windowing-function version it is slower. Let’s have a look at the more performant alternative:

select distinct --distinct is needed, otherwise the window returns one row per photo
PR.ProfileName,
max(Created) OVER (PARTITION BY PR.ID) as LastPhotoDate
from Profiles PR
join Photos P
on P.ProfileID = PR.ID

That’s actually quite clear (if you are used to windowing functions) and performs better. But it’s still not ideal: coders now need to learn about OVER and PARTITION just to do something seemingly trivial. SQL has let us down. It looks like someone who creates RDBMSs told the SQL committee to add windowing functions to the standard. It’s not user friendly at all; computers are supposed to do the hard work for us!

It should look like this:

select
PR.ProfileName,
max(Created)
from Photos P
join Profiles PR
on PR.ID= P.ProfileID
Group By PR.ID --or Group By PR.ProfileName

I don’t see any reason why an RDBMS cannot make this work. I know that if a person gave me this instruction and I had a database, I would have no trouble. Of course, if different partitioning is required within the query, then there is still the option of windowing functions, but for the stock-standard challenges, keep the SQL simple!
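
For what it’s worth, the closest thing today’s standard SQL will accept is grouping by both columns (and some engines, PostgreSQL for example, will accept grouping by PR.ID alone when it is the table’s primary key). A minimal sketch against the same hypothetical tables:

select
PR.ProfileName,
max(P.Created) as LastPhotoDate
from Photos P
join Profiles PR
on PR.ID = P.ProfileID
group by PR.ID, PR.ProfileName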

Now what happens when you get a more difficult situation? What if you want to return the most recently uploaded photo (or at least the ID of the photo)?

--Get each profiles' most recent photo
select
PR.ProfileName,
P.PhotoFileName,
P.PhotoBlob
from Photos P
join Profiles PR
on PR.ID= P.ProfileID
join (
select ProfileID, max(Created) as Created
from Photos
group by ProfileID
) X
on X.ProfileID = P.ProfileID
and X.Created = P.Created

It works, but it’s awkward and has potential for performance problems. From my limited experience with windowing functions and a short search on the web, I couldn’t find a windowing-function solution. But again, there’s no reason an RDBMS can’t make this easy for us, and again the SQL language should make it easy for us!
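
For completeness, there is a ranking-based pattern that does the job with a windowing function (a sketch, assuming SQL Server-style syntax and the same hypothetical tables), though it hardly weakens the argument that the plain-English version should just work:

select ProfileName, PhotoFileName, PhotoBlob
from (
    select
    PR.ProfileName,
    P.PhotoFileName,
    P.PhotoBlob,
    row_number() over (partition by PR.ID order by P.Created desc) as rn --newest photo per profile gets rn = 1
    from Photos P
    join Profiles PR
    on PR.ID = P.ProfileID
) ranked
where rn = 1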

Why can’t the SQL standards group innovate? Something like this:

select
ProfileName,
P.PhotoFileName,
P.PhotoBlob
from Photos P
join Profiles PR
on PR.ID= P.ProfileID
group by ProfileID
being max(Created) --reads: being a record which has the maximum created field value

And leave it to the RDBMS to decide how to make it work. In procedural coding over a set, while you are searching for a maximum value you can also keep hold of the entity which has that maximum. There’s no reason this can’t work.
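
That procedural equivalent is only a few lines of C# (the type and names below are hypothetical, purely to illustrate that tracking the maximum and the row that owns it costs nothing extra):

using System;
using System.Collections.Generic;

class Photo { public int ProfileID; public DateTime Created; public string PhotoFileName; }

static class ArgMaxDemo
{
    // One pass keeps both the maximum Created value and the photo that owns it;
    // an RDBMS could do exactly this while evaluating max(Created) per group.
    static Photo Newest(IEnumerable<Photo> photosForProfile)
    {
        Photo newest = null;
        foreach (var photo in photosForProfile)
            if (newest == null || photo.Created > newest.Created)
                newest = photo;   // the record "being max(Created)"
        return newest;
    }
}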

It seems the limitation is the SQL standardisation body. I guess someone could always implement a workaround: create a plugin for open-source SQL query tools, along with open-source functions to convert SQL+ [with such abilities as introduced above] into plain SQL.

(By the way, I have by no means completely thought all of the above through, but I hope it conveys the spirit of my frustrations and of a possible solution. I hope some RDBMS experts can comment on this dilemma.)

Mining Space

It’s quite an aspirational idea, to even look at mining asteroids in space. It may feel unreachable, something that will always be put off to the future. But the creation of the new company Planetary Resources is real, with financial backers and a significant amount of money behind it. We’re currently in transition: government, particularly the U.S. government, is minimising its operational capacity for space missions, while the commercial sector is being encouraged and is growing. For example, Sir Richard Branson’s Virgin Galactic, along with other organisations, is working toward real, affordable (if you’re rich…) space tourism and, by extension, commoditisation of space access in general, bringing down prices and showing investors that space isn’t just for science anymore; you can make a profit.

I recently read a pessimistic article, one where the break-even price for space mining is in the hundreds of millions of dollars for a given mineral. One needs to be realistic, however in this case I think the author is being far too dismissive. There are many concepts in the pipeline which could significantly reduce the cost of Earth-to-space transit. My most favoured is the space elevator, where you don’t need a rocket to climb kilometres above the Earth (although you would likely still need some sort of propulsion to accelerate into orbit).

But as well as being across the technology, a critic needs to be open to other ideas. For example, why bring the minerals back to Earth at all? Why not attempt to create an extra-terrestrial market for them? It may well cost much more to launch a large bulk of material into orbit than to extract it from an asteroid (in the future), with space factories building cities in space.

Of course, I still think space mining is hopeful at best; let’s balance despair with hopeful ideas.


SQL-like like in C#

Sometimes you need to build dynamic LINQ queries, and that’s when the Dynamic Query Library (download) comes in handy. With this library you can build a where clause using BOTH SQL and C# syntax. Except for one annoying problem: Like isn’t supported.

When using pure LINQ to build a static query, you can use SqlMethods.Like. But you will find that this only works when querying a SQL dataset; it doesn’t work for local collections, as there’s no C# implementation.

My Solution

So I mocked up a quick and dirty like method which only supported a single % wildcard, no escape characters and no _ placeholder. It did the job, but with so many people asking for a solution which mimics like, I thought I’d make a proper one myself and publish it, Public Domain-style (a sketch of the core matching idea follows the feature list below).

It features:

  • Wildcard is fully supported
  • Placeholder is fully supported
  • Escape characters are fully supported
  • Replaceable tokens – you can change the wildcard (%), placeholder (_) and escape (!) tokens, when you call the function
  • Unit Tested
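
For readers who just want the idea, here is a minimal sketch of what such a matcher can look like. The method name SQLMethods.EvaluateIsLike matches the one referenced by the Dynamic Query changes below, but this regex-based body is an illustrative reconstruction under my own assumptions, not the published code:

using System.Text;
using System.Text.RegularExpressions;

public static class SQLMethods
{
    // Illustrative reconstruction only: translate a LIKE pattern into a Regex and test the input.
    // Default tokens match the feature list above: % wildcard, _ placeholder, ! escape.
    public static bool EvaluateIsLike(string input, string pattern)
    {
        if (input == null || pattern == null)
            return false;

        var regex = new StringBuilder("^");
        bool escaped = false;
        foreach (char c in pattern)
        {
            if (escaped) { regex.Append(Regex.Escape(c.ToString())); escaped = false; }
            else if (c == '!') escaped = true;          // escape token
            else if (c == '%') regex.Append(".*");      // wildcard token
            else if (c == '_') regex.Append(".");       // placeholder token
            else regex.Append(Regex.Escape(c.ToString()));
        }
        regex.Append("$");

        // Case-insensitive, like a typical SQL Server collation (an assumption).
        return Regex.IsMatch(input, regex.ToString(), RegexOptions.IgnoreCase | RegexOptions.Singleline);
    }
}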

Downloads:

Adding like support to the Dynamic Query Library – Dynamic.cs

I also modified the Dynamic Query Library to support like statements, leveraging the new function. Here are the steps required to add support yourself:

1. Add the Like value into the ExpressionParser.TokenID enum

            DoubleBar,
            Like
        }

2. Add the token.id == TokenId.Like clause as shown below into ExpressionParser.ParseComparison()

        Expression ParseComparison() {
            Expression left = ParseAdditive();
            while (token.id == TokenId.Equal || token.id == TokenId.DoubleEqual ||
                token.id == TokenId.ExclamationEqual || token.id == TokenId.LessGreater ||
                token.id == TokenId.GreaterThan || token.id == TokenId.GreaterThanEqual ||
                token.id == TokenId.LessThan || token.id == TokenId.LessThanEqual ||
                token.id == TokenId.Like) {

3. Add the TokenID.Like case as shown below into the switch found at the bottom of the ExpressionParser.ParseComparison() function

                    case TokenId.LessThanEqual:
                        left = GenerateLessThanEqual(left, right);
                        break;
                    case TokenId.Like:
                        left = GenerateLike(left, right);
                        break;
                }

4. Add the following inside the ExpressionParser class (the SQLMethods class needs to be accessible: either reference the library or copy the source code in, and add a using directive for the appropriate namespace)

        Expression GenerateLike(Expression left, Expression right)
        {
            if (left.Type != typeof(string))
                throw new Exception("Only strings supported by like operand");

            return IsLike(left, right);
        }

        static MethodInfo IsLikeMethodInfo = null;
        static Expression IsLike(Expression left, Expression right)
        {
            if (IsLikeMethodInfo == null)
                IsLikeMethodInfo = typeof(SQLMethods).GetMethod("EvaluateIsLike", new Type[] { typeof(string), typeof(string) });
            return Expression.Call(IsLikeMethodInfo, left, right);
        }

5. Change the start of the default switch case in ExpressionParser.NextToken() according to the code shown below

                default:
                    if (Char.IsLetter(ch) || ch == '@' || ch == '_') {
                        do {
                            NextChar();
                        } while (Char.IsLetterOrDigit(ch) || ch == '_');

                        string checktext = text.Substring(tokenPos, textPos - tokenPos).ToLower();
                        if (checktext == "like")
                            t = TokenId.Like;
                        else
                            t = TokenId.Identifier;
                        break;
                    }

Example

I use this in my own business system, but I preprocess the LIKE rule, as I have quite a few “AI” rules for bank transaction matching. (You can also use like statements directly.)
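
For example, used directly (the collection and field name here are hypothetical), a predicate string might look like this:

bool anyColes = DynamicQueryable.Where(transactions.AsQueryable(), "Description like \"%COLES%\"").Any(); //transactions is a list of a type with a string Description property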

There are many ways to cache; here is how I cache a predicate, looping over the set of AIRules in my DB:

RuleCache[i].PreProcessedPredicate = DynamicQueryable.PreProcessPredicate<vwBankTransaction>(RuleCache[i].Filter); //Change the textbased predicate into a LambdaExpression

And then here is how I use it, looping over the array of cached rules:

bool MatchesRule = DynamicQueryable.Where(x.AsQueryable(), RuleCache[i].PreProcessedPredicate).Any(); //Run the rule

Where `x` is a generic list (not a DB query) containing the one record I am checking. (Yes, it would be possible to loop over a larger set [of bank transactions], but I haven’t got around to that performance improvement in my system; I haven’t noticed any performance issues, so it’s not broken.)


Introducing OurNet – A community project

I’ve kept this under wraps for a while, but hope to make it a more public project. That lowers any prospect of me making money from it, but makes it more likely that I will see it happen.

While campaigning and thinking laterally about the issues and technologies of the NBN, I devised some interesting configurations of wireless and networking devices which could create a very fast internet for very little cost.

Each aspect provides little improvement on its own, but together they can form an innovative, commercialisable product.

Aspect 1 – Directional links are fast and low noise

Nothing very new here. Directional links are used for backhaul in many situations: in industry, education, government and commercially. Use a directional antenna and you can focus all the electromagnetic radiation. On its own, though, this cannot create a 10Gbps internet service to each home at a commercialisable price.

Aspect 2 – Short directional links can be made very fast

There is lots of research in this domain at the moment. Think of the new Bluetooth standard, WiGig, UWB and others, all trying to reduce wires in the home and simplify media connectivity. If, rather than connecting house to local aggregation point, we connected house to house, a short-link technology could be employed to create links in the order of 10Gbps between houses. But on its own this does not create a low-latency network: with all those houses, latency would add up.

Aspect 3 – Mesh network node hop latency can be driven to practically 0

When you think wireless mesh, you think of WiFi systems. Don’t. I have devised two methods for low-latency switching across a mesh. Both involve routing once to establish a link (or a couple of links).

The first establishes a path by sending an establishment packet containing all the routing information, which is popped at each hop, along with a pathid. Subsequent data packets then include only the pathid and are switched (not routed) to the destination. The first method requires buffers.

The second establishes a path similarly to the first, except that rather than a pathid, each node reserves timeslices at which the correct switching will occur, a bit like a train track. This one can potentially waste some bandwidth, particularly with guard intervals; however, the first method can supplement it, sending packets even in the guard intervals and unreserved timeslices. The second method does not require buffers.

The second method is best for reserving static bandwidth, such as for a phone or video call, or for a baseload of internet connectivity, so HTTP requests are very responsive. The first is for bursting above that minimum baseload of connectivity.
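
To make the first method concrete, here is a minimal sketch of a node’s behaviour (all names are hypothetical, and in practice this logic would live in an FPGA as noted below, not C#): the establishment packet pops this node’s hop and records the pathid, and every later data packet is forwarded by a plain table lookup rather than a routing decision.

using System.Collections.Generic;

// Illustrative sketch of the first (buffered, label-switched) method only.
class MeshNode
{
    // pathid -> outgoing link index, filled in once by the establishment packet
    readonly Dictionary<uint, int> switchTable = new Dictionary<uint, int>();

    // Establishment packet: pop this node's hop, remember the pathid,
    // and forward the remainder of the route out of that link.
    public void OnEstablish(uint pathId, Queue<int> remainingHops)
    {
        int outLink = remainingHops.Dequeue();
        switchTable[pathId] = outLink;
        ForwardEstablish(outLink, pathId, remainingHops);
    }

    // Data packet: no routing, just an O(1) lookup and forward (switching).
    public void OnData(uint pathId, byte[] payload)
    {
        if (switchTable.TryGetValue(pathId, out int outLink))
            ForwardData(outLink, pathId, payload);
        // else: unknown path, so drop or buffer until establishment completes
    }

    void ForwardEstablish(int link, uint pathId, Queue<int> hops) { /* transmit on link */ }
    void ForwardData(int link, uint pathId, byte[] payload) { /* transmit on link */ }
}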

There is a third (and fourth and fifth, multi-cast and broadcast) method, for very large chunks of data, which can simply be sent with the route or routed on demand. Such a method might be eliminated entirely though, as it overlaps too much with the first.

This can be implemented initially with an FPGA and other components, such as a GPS module for accurate timing (or a Kalman-filtered multiple-quartz system), and eventually mass-produced as an ASIC solution.

Aspect 4 – Total mesh bandwidth can be leveraged with distributed content

If every house has 4 links of 10Gbps, then you can see how quickly the total bandwidth of the mesh would increase. However this total bandwidth would be largely untapped if all traffic had to flow to a localised point of presence (PoP). That would be the potential bottleneck.

However, one could very easily learn from P2P technologies, and the gateway at the PoP could act as a tracker for content distributed across the mesh. Each node could cheaply store terabytes of data. So when you go to look up The Muppets – Bohemian Rhapsody, you start getting the stream from YouTube, but it is cut over once a link to a copy of the content on the mesh has been established.

Problems

There are some problems in this grand plan to work through, but it should only be a matter of time before solutions are found.

The first is finding the perfect short links. Research so far has not been directed at developing a link specifically for this system; at the moment we would be re-purposing an existing link technology to suit our needs, which is completely viable. However, to gain the best performance, one would need to initiate specific research.

The second is installation: we need to find the best form factor and installation method for each node on a house. I anticipate that a cohesive node is the best option, with all components, including the radios and antennas, on the same board. Why? Because every time you try to distribute the components, you need to go from our native 64-bit communication paths to Ethernet or RF over SMA etc., gaining latency and losing speed and/or gaining noise. The trade-off of keeping everything together is that you increase the distance between nodes. One possible compromise could be to use waveguide conduit to carry the various links closer to the edge of the house, capped with plastic to prevent spiders getting in.

The final problem is a subjective one: power consumption. However, this is a moot point for various reasons. For one, the node can be designed for low power consumption. Secondly, the link technologies need not be high-power devices; I’ve seen some near-IR transmitters just come out (not commercially yet) which cost about a dollar each (for the whole lot) and can reach speeds of 10Gbps at very low power. Finally, with the falling cost of solar panels, one could incorporate a panel (with battery) into the node to lower installation costs.

Opportunities / Disruptions

A new paradigm for the internet:

OurNet is the current name for a reason. People own their own mobile phone and pay it off on a plan. With OurNet you can own your own node and pay it off on a plan. But in addition, the node you own becomes part of the actual network as well as your connection to it; it forms the backhaul AND the last mile, hence it being everyone’s net. Such a shift in ownership is sure to have an impact on consumers: a new way of accessing the internet and a new sense of belonging and contributing.

And of course OurNet achieves fixed-line-like infrastructure with wireless technology. Individuals, businesses and datacentres could have four (or even more) redundant links, with each link able to take multiple redundant paths. This is not possible with the current (domestic) hierarchical model of the internet, where one needs to subscribe to multiple vendors to achieve redundancy, costing $$$. You could reliably host your own website from your home or business, to the world!

Faster Mobile communications

With a node on every house, mobile communication can become very fast with very low contention. Each node can be equipped with an omni-directional antenna for short-range communication. In addition, a beam-forming directional antenna or MEMS-tunable antenna can supplement or replace the omni-directional antenna, allowing very high-speed, low-noise links to mobile devices.

High Precision Positioning

OurNet is made up of fixed-position nodes with super-high-resolution timing. If this is leveraged, GPS positioning can be enhanced to the millimetre, opening up further opportunities for driverless cars and the like (they already work well with visual object detection and lidar, but an additional reference point can’t hurt).


Carbon Tax and EVs

Just a quick one…

There are many reasons Electric Vehicles (EVs) are becoming more popular: improvements in technology giving greater range, and production by bigger companies lowering prices. But there is one major driving factor: running cost.

The high and at times rising price of fossil fuels makes consumers look elsewhere and think outside their usual comfort zone. Electricity is cheap. Because of this, the technology is being researched and major car companies are adjusting.

So what will happen when a Carbon Tax comes into Australia, one which doesn’t increase petrol prices, yet does increase electricity prices? Now, I don’t subscribe to the Global Warming scare; I’ve personally read a few papers and plenty of commentary, and I understand it to be a hoax.

However, it seems a contradiction to create regulation which will adversely affect the market, making consumers less likely to choose the “greener” option. (In my opinion EVs are not just cheaper to fuel, but also a lot cheaper to maintain: no engine oil, tuning, timing belt, radiator, etc.)
