Let's leave JavaScript behind

Disclaimer: I am sure JavaScript will continue to be supported, and even to progress in features and tooling, regardless of any Managed Web. Some people simply love it, flaws and pitfalls and all, like a sweet elderly married couple holding onto life for each other.

It’s great what the web industry is doing with ECMAScript: from version 6 we will finally see something resembling classes and modules. But isn’t that something the software industry has had for years? Why do we continue to handicap the web with an inferior language when there have always been better options? Must we wait another 2-3 years before we get operator overloading in ECMAScript 7?

The .NET Framework is a rich, standardised platform built on an Intermediate Language (IL). Its compiler optimisations, toolset and, importantly, its security model make it a vibrant, optimised ecosystem which could be leveraged. It could have been leveraged years ago with a bare-minimum Mono CLR.

Google Chrome supports native code, but it runs in a separate process and calls to the DOM must be marshalled through inter-process communication. This is not ideal. If the native code ran in the same process, it would be a good foundation for Mono.

I believe it is possible, perhaps even trivial, to achieve this nirvana of a Managed Web. We just need to take small considered steps to get there, so here’s my plan.

  1. Simple native code in the same process – JavaScript is currently executed on the main thread, presumably through the window message pump executing delegates. These delegates could simply forward to managed function delegates. But first, we should be able to trigger an alert window from native code compiled inside the Google Chrome code base.
  2. Simple Mono support – Fire up Mono and provide just enough of a Base Class Library (BCL) to trigger an alert. This time there will be an IL DLL containing a class which implements an interface for start-up.
  3. Fuller API – With the simple milestones above completed, a complete BCL API can be designed and implemented.
  4. Optimisations – For example, enumerating the DOM may be slowed by crossing the managed/unmanaged boundary; jQuery-like functions could be implemented in native code and exposed through the BCL.

Along the way, other stacks and browsers could also leverage our work, establishing support for at least Java as well.

Example API (a code sketch follows below):

IStartup

  • void Start(IWindow window) – Called when the applet is first loaded, analogous to when JavaScript is first loaded (for JavaScript there isn’t an event; it simply starts executing the script from the first line)

IWindow
see http://www.w3schools.com/jsref/obj_window.asp

IDocument
see http://www.w3schools.com/jsref/dom_obj_document.asp
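Putting those together, a minimal sketch of what the managed side might look like. Every member below is an illustrative assumption mirroring the W3C objects linked above, not a settled design:

public interface IStartup
{
  // Called when the applet is first loaded
  void Start(IWindow window);
}

public interface IWindow
{
  IDocument Document { get; }
  void Alert(string message); // window.alert
}

public interface IDocument
{
  IElement GetElementById(string id); // document.getElementById
}

public interface IElement
{
  string InnerHtml { get; set; }
}

// Milestones 1 and 2 from the plan: trigger an alert from managed code
public class HelloApplet : IStartup
{
  public void Start(IWindow window)
  {
    window.Alert("Hello from the Managed Web");
  }
}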

Warm up – Possible disadvantage

JavaScript can be interpreted straight away, and several levels of optimisation are applied only where needed, favouring fast start-up. IL would need to be JIT’d, which is a relatively slow process, but there’s no reason why it cannot be AOT-compiled by the web server. Still, I see this as the biggest disadvantage, and it needs to be front of mind.
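Mono already ships an ahead-of-time mode, so the server-side precompile step could be as simple as the following (assembly name hypothetical); the precompiled image would then be served alongside the plain IL:

mono --aot MyApplet.dll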

Other people around the web who want this

http://tirania.org/blog/archive/2012/Sep-06.html

 


University BC

University is becoming increasingly irrelevant for the IT industry.

It’s three years of full-time study, yet in a month a talented 12-year-old can write an app that makes him a millionaire. Course content always lags behind, not for lack of pushing by academics and industry; the bureaucracy of the system drags. With content such as teamtreehouse.com on the rise, there is potential for real upset in the IT education market. And without any entrepreneurship support, there is no excitement, no potential to build something meaningful from nothing. Increasingly, universities will be perceived as the old way, by new students as well as by industry looking to hire.

I would like to see cadetships for IT, with online content and part-time attendance at training institutions for professional development and assessment. Although even assessment is questionable: students are not given access to the internet during assessments, which does not reflect any true-to-life scenario. I value a portfolio of code over grades.

I seek individuals who have experience in Single Page Applications (SPA), Knockout.js, JavaScript, jQuery, Entity Framework, C#, 2D drawing, graphic art and SQL (window functions). Others are looking for Ruby on Rails developers. All of my recent graduates have very limited exposure to any of these.

I could be wrong, but if I am right, institutions that ignore these facts are only going to find out the hard way. I know the IT industry has been reaching out to universities to help keep them relevant; it’s time for universities to reach back out to the industry, and relinquish some of their control for the benefit of both students and the industry.


IL to SQL

There are some cases where one needs to perform more complex processing, necessitating application-side processing OR custom SQL commands for better performance. For example, splitting one column of comma-delimited data into three other columns:

public virtual void DoEntityXSplit()
{
  var needSplitting = db.EntityXs.Where(x => x.Splitted1 == null);
  foreach (var item in needSplitting)
  {
    string[] split = item.DelimitedField.Split(',');
    item.Splitted1 = split[0];
    item.Splitted2 = split[1];
    item.Splitted3 = split[2];
    item.Save(); //THIS (save each item as you go)...
  }
  db.SaveChanges(); //OR this (save everything at the end)
}

When you run DoEntityXSplit, the unoptimised code above may run. However, if supported, automatic optimisation is possible, derived from the IL (Intermediate Language – aka .NET bytecode) body of the method, when:
i) the ORM (Object-Relational Mapper – e.g. NHibernate / Entity Framework) supports some sort of “ILtoSQL” compilation at all; and
ii) the method doesn’t contain any unsupported patterns or references.
When both hold, the raw SQL may be run instead. This could even include dynamically creating a stored procedure for still faster operation.

public override void DoEntityXSplit()
{
  //This is pseudo SQL code
  db.RunQuery("
    declare cursor @NeedSplitting as (
      select ID, DelimitedField
      from EntityXs
      where Splitted1 is null
    );

    open @NeedSplitting;
    fetch next from @NeedSplitting into @ID, @DelimitedField
    while (@StillmoreRecords)
    begin
      @Splitted1 = fn_CSharpSplit(@DelimitedField, ',', 0)
      @Splitted2 = fn_CSharpSplit(@DelimitedField, ',', 1)
      @Splitted3 = fn_CSharpSplit(@DelimitedField, ',', 2)

      update EntityX
      set Splitted1 = @Splitted1,
      Splitted2 = @Splitted2,
      Splitted3 = @Splitted3
      where ID = @ID

      fetch next from @NeedSplitting into @ID, @DelimitedField
    end
  ");
}

Of course, this could also be compiled down to:

override void DoEntityXSplit()
{
  //This is pseudo SQL code
  db.RunQuery("
    update EntityX
    set Splitted1 = fn_CSharpSplit(DelimitedField, ',', 0),
    Splitted2 = fn_CSharpSplit(DelimitedField, ',', 1),
    Splitted3 = fn_CSharpSplit(DelimitedField, ',', 2)
    where Splitted1 is null
  ");
}

But I wouldn’t expect that from version 1. Or would I?

Regardless, one should treat IL as source code for a compiler which has optimisations for T-SQL output. The ORM's mappings would need to be read to resolve IL properties/fields to SQL columns. It may sound crazy, but it’s definitely achievable, and this project looks like a perfect fit for such a feat.
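As a sketch of the very first step such a compiler would take, the raw IL of a method body is already accessible through reflection (the type and method names are just the example from above):

using System;
using System.Reflection;

class IlToSqlSketch
{
  static void Main()
  {
    // Grab the IL bytes of the method we want to translate
    MethodInfo method = typeof(MyRepository).GetMethod("DoEntityXSplit");
    byte[] il = method.GetMethodBody().GetILAsByteArray();
    Console.WriteLine($"{method.Name}: {il.Length} bytes of IL to analyse");
    // A real ILtoSQL compiler would decode these opcodes, check that only
    // supported patterns appear (condition ii above), resolve properties to
    // columns via the ORM mappings, and emit the T-SQL shown earlier.
  }
}

class MyRepository
{
  public virtual void DoEntityXSplit() { /* as above */ }
}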

Where will BLToolKit be in 10 years? I believe ILtoSQL should be a big part of that future picture.

If I get time, I’m keen to have a go. It should be built as a standalone DLL which any ORM can leverage. Who knows, maybe EF will pick this up?


Poppler for Windows

I have been using the Poppler library for some time, across a series of various projects. It’s an open-source set of libraries and command-line tools, very useful for dealing with PDF files. Poppler is targeted primarily at the Linux environment, but the developers have included Windows support in the source code as well. Getting the executables (exe) and/or DLLs for the latest version, however, is very difficult on Windows. So after years of pain, I jumped on oDesk and contracted Ilya Kitaev, both to compile with Microsoft Visual Studio, and to prepare automated tools for easy compiling in the future.

So now, you can run the following utilities from Windows!

  • PDFToText – Extracts all the text from a PDF document. I suggest you use the -layout option to get the content in the right order.
  • PDFToHTML – Which I use with the -xml option to get an XML file listing each text segment’s text, position and size; very handy for processing in C#.
  • PDFToCairo – For exporting to image types, including SVG!
  • Many more smaller utilities.
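The shipped binaries themselves use lowercase names; typical invocations look like this (file names are placeholders):

pdftotext -layout input.pdf output.txt
pdftohtml -xml input.pdf output.xml
pdftocairo -svg input.pdf page.svg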

Download

Latest binary: poppler-0.51_x86.7z

Older binaries:
poppler-0.50_x86.7z
poppler-0.49_x86.7z
poppler-0.48_x86.7z
poppler-0.47_x86.7z
poppler-0.45_x86.7z
poppler-0.44_x86
poppler-0.42.0_x86.7z
poppler-0.41.0_x86.7z
poppler-0.40.0_x86.7z
poppler-0.37_x86.7z
poppler-0.36.7z
poppler-0.35.0.7z
poppler-0.34.0.7z
poppler-0.33.0.7z
poppler-0.26.4.7z
poppler-0.26.3.7z
poppler-0.26.1_x86
poppler-0.26.1
poppler-0.22.0
poppler-0.18.1

Man-made Gaia

see: http://www.digitaltrends.com/cool-tech/nasa-turns-the-world-into-music/

I’m sure the NASA scientist knew how popular the story would be when he decided to artificially translate the measurements of a scientific experiment into audible sounds.

The journalist attempts to imply that the audio is a direct feed with no translation: “The fact that the data is sampled at the same rate as a CD isn’t entirely accidental.” Those in the audio production business will understand that sampling rate bears no relation to audible sound; the frequency range of the samples does. And given the samples were taken over a 60-day period, we can be sure that the frequencies were very low and have been sped up.

I’m not terribly familiar with the EMFISIS project, but from what I hear, the audio sounds like the detection of bursts of charge expelled from the Sun (affected by the Earth's magnetic shield, of course). That is, a quantity moving from a low resonance to a higher one. But of course this quantity had to be artificially scaled to be audible, and it’s this scaling which produces the sound of birds in a rainforest. If you listen closely, the sound also resembles chipmunks, which is what happens when you record even your own voice and speed it up.

When scaling the audio, the engineer chose the frequency range carefully to make it sound like the Earth was in harmony with the rainforests. Quite a dishonest representation. The original frequencies are much lower, and mapping them to the lowest audible frequencies would have been more justifiable; however, this would have resulted in the sound of a group of fat birds singing. Not the effect sought after.

Of course, it may all be in good spirit and fun. But NASA is funded by taxpayers in the US, and this isn’t science; it is pandering to pagan religion. Simply read all the comments left on the sound clip: they’re on a spiritual high.

Sasha Burden – Pre-Dictator of Australia

A glimpse into the self-absorbed world of the totalitarian journalist. It happens only rarely; they try to keep their ugly, self-moralising core hidden from view, but we occasionally see it rear up its repugnant head. Andrew Bolt blogged about an intern, an intern who wrote about her experiences in a tone that uncovers a stereotypical totalitarian (dictator), one who would make everyone think like her. She’s right, everyone else is wrong; she is the sole purveyor of good. Sounds kinda religious, doesn’t it?

I was so confronted and appalled by her writing, I couldn’t help but write what I really thought. In so many workplaces and homes, people talk about issues free from the ears of political correctness, and as a result speak much more honestly and plainly about how they feel and what they’re thinking. Burden's propaganda was a trigger for me; the gap between reality and the political class is immense, and too many people are afraid to publicly oppose the unrelenting tide of so-called progressivism. Would it hurt their business? Would they lose friends? These people describe a world where opposing views are in the minority, despite the polling. The late John Linton from Exetel is one source of inspiration; he opened many a can of worms. Let’s see what really happens.

An obese topless man on a motorcycle. Original caption: “The plague of anorexia must be overcome” (Photo credit: Wikipedia)

So Burden was allowed to sit in at editorial meetings.

Comments in the news conference included
“Of course he’s fat, look at what he eats” and
“How does someone let that happen?”
Was Burden born under a rock? Probably the inner suburbs of Melbourne; close enough. If she entered any real-life workplace she would find such comments embedded in Australian culture. In fact, it’s typical of human nature, having a laugh at someone’s expense. It’s honest, obvious and, most importantly, free speech. Would Burden have such speech suppressed in the workplace?
It’s a bit like calling a white-skinned person a “whitefella”: they have white skin! But Burden doesn’t appear to want people telling the truth; oh no, she would have people cower in the dark ages, with her seasoned judgement marking heresies in the workplace.

Her suppression of the truth continues:

…a female journalist bizarrely
insisted that an article debating the benefits
of chocolate should be written by a female: “A
woman needs to say chocolate is good.”
Burden, it’s a well-known fact that women openly adore chocolate (they’ve come out of the cupboard) and respond well to other women’s opinions. It’s a well-known fact that women are physically and even mentally different to men. Real people have no problem with this, and embrace it. Burden would rather dictate the opposite to society.
She is clearly quite easily offended by the mundane:
 …a potential story on a trans person with him.
His remarks included, “He? She? It?” “There
has to be a photo of it” and “You should put
the heading—‘My Life As A She-Man!’ or
‘G-Boy.’” No one in the newsroom reacted.

Of course, I expected her to pass holy judgement on an opinion about gay marriage:

…moved from transphobia to homophobia on the
eighth day, commenting on a recent piece
on gay marriage. “Why are they [the gay
community] making such a fuss? It’s been
this way for millennia, why change now?”
How dare an individual have an opinion! He should relinquish his persona and join the (minority) collective! This is the poster issue for people like Burden: disinterested in how people feel, working their hardest to engineer opinions over decades of unrelenting pontificating. It’s the epitome of self-righteousness, her ego towering above the peasant minds of society.
She readily misinterprets kindness:
Men were also continuously and unnecessarily
sexist, waiting for me to walk through doors
and leave the elevator before them
It’s quite possible this workplace holds doors for females, but who cares? Most likely the men hold doors for other males too, and it’s her narrow-minded philosophy which blinded her to that. Generally, I’ve found females not to be as logical as males, often more concerned with a strongly held opinion, quite emotional, different from males. It’s ironic that it’s her type who insist we accept people for who they are and not try to change them; yet I see women being bullied into taking up a career, and on the other hand hear of women regretting not having children when they were younger, wishing they could have mothered their children. It is frustrating at times to see such busybodies, nosey about everyone else’s business.

So why would someone be like Sasha Burden? It’s difficult from my vantage point to guess; I don’t know her personally, but I can assume. I don’t need to speculate on her past, her upbringing or her external influences. I can judge from how she stands today, and guess her aspirations. She wants to be seen as purely moral, taking up the cause of redefining the concept of morality with her own values. She does not like testing her assumptions, and once she has taken a side on an issue she will fight on despite the facts; she will never concede defeat. I doubt such a person can ever change, but I sure hope I never become so arrogant. I hope she fails her inquisition, and never fully imposes her incoherent morality on Australians. I hope more people can stand up against people like her; it’s hard, because she purports to stand for morality, so she holds the high ground.


We will never meet a space-exploring or pillaging Alien

The thought of Aliens captures the imagination: countless worlds beyond our own with advanced civilisations. But I have strong suspicions that we will never meet an Alien. I’ve always had my doubts, and then I recently read an article which uses very sound reasoning to preclude their existence (I don’t have the reference for the specific one).

DON’T EXIST

It basically goes:

  1. The Universe is roughly 13 billion years old – plenty of time for Aliens to develop technology
  2. The Universe is gigantic – plenty of places for various Aliens to develop technology
  3. We would want to find other Aliens – Other Aliens would want to also look for other life
  4. Why haven’t they found us? Why haven’t we found them?
  5. Because they don’t exist
When we first started surveying space and searching for Aliens, we should have found them; they, as we do, would have been transmitting signals indicating intelligence.
NEVER MEET
But there is also another, less compelling, reason. The Universe appears to be expanding, and that expansion is accelerating. Unless worm-hole traversal is found to be practically feasible, the meeting part will never happen.
OTHER REASONS
Here are some more links to other blogs and articles I found, which add more information and other reasons which logically prove that Aliens don’t exist:
I guess that one, or even several, logical reasons cannot prove absolutely that Aliens do not exist; we can only be, say, 99.9% or more confident. Only if we searched all the cosmos and concluded that none exist could it be an absolute fact. We could have an Alien turn up tomorrow and explain that they have searched the Universe and only just recently found us, that it’s only them and us, and that their home world is hidden behind another galaxy or nebula or something. So logic alone is not definitive, but it is certainly a good guide if the logic itself is not disproven.
Take Fermat’s Last Theorem, for example: it was proven “358 years after it was conjectured”. There were infinitely many cases to check, so an exhaustive evaluation was not practical; a mathematical proof was required. Many believed it to be true, of course, but mathematics, being a science, required proof.
So unless we can prove that Aliens don’t exist with scientific observation, and not just with probability, one cannot say with authority that Aliens don’t exist; but at the same time, one definitely cannot believe that Aliens do exist without significant proof.

Windowing functions – Who Corrupted SQL?

I hate writing sub-queries, but I seem to hate windowing functions even more! Take the following:

select
PR.ProfileName,
(select max(Created) from Photos P where P.ProfileID = PR.ID) as LastPhotoDate
from Profiles PR

In this example, I want to list all profile names, together with a statistic of the most recently uploaded photo. The sub-query version is easy enough and only looks a little bloated, but compared to windowing functions it is slower. Let’s look at the more performant alternative:

select distinct --distinct: the join otherwise yields one row per photo
PR.ProfileName,
max(Created) OVER (PARTITION BY PR.ID) as LastPhotoDate
from Profiles PR
join Photos P
on P.ProfileID = PR.ID

That’s actually quite clear (if you are used to windowing functions) and performs better. But it’s still not ideal: coders now need to learn about OVER and PARTITION just to do something seemingly trivial. SQL has let us down. It looks like someone who builds RDBMSs told the SQL committee to add windowing functions to the standard; it’s not user-friendly at all, and computers are supposed to do the hard work for us!

It should look like this:

select
PR.ProfileName,
max(Created)
from Photos P
join Profiles PR
on PR.ID= P.ProfileID
Group By PR.ID --or Group By PR.ProfileName

I don’t see any reason why an RDBMS cannot make this work. I know that if a person gave me this instruction and I had a database, I would have no trouble. (In fact, PostgreSQL already allows selecting columns that are functionally dependent on a grouped primary key, which is most of the way there.) Of course, if different partitioning is required within the query, then there is still the option of windowing functions, but for the stock-standard challenges, keep the SQL simple!

Now what happens when you get a more difficult situation? What if you want to return the most recently uploaded photo (or at least the ID of the photo)?

--Get each profile's most recent photo
select
PR.ProfileName,
P.PhotoFileName,
P.PhotoBlob
from Photos P
join Profiles PR
on PR.ID= P.ProfileID
join (
select ProfileID, max(Created) as Created
from Photos
group by ProfileID
) X
on X.ProfileID = P.ProfileID
and X.Created = P.Created

It works, but it’s awkward and has potential for performance problems. From my limited experience with windowing functions and a short search on the web, I couldn’t find a windowing-function solution without a wrapping sub-query (see the sketch below). But again, there’s no reason an RDBMS can’t make it easy for us, and again the SQL language should make it easy for us!
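For comparison, the closest windowing form I can construct still needs a wrapping sub-query, which rather proves the point:

select ProfileName, PhotoFileName, PhotoBlob
from (
select
PR.ProfileName,
P.PhotoFileName,
P.PhotoBlob,
row_number() over (partition by P.ProfileID order by P.Created desc) as rn
from Photos P
join Profiles PR
on PR.ID = P.ProfileID
) X
where rn = 1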

Why can’t the SQL standards group innovate? Something like this:

select
ProfileName,
P.PhotoFileName,
P.PhotoBlob
from Photos P
join Profiles PR
on PR.ID= P.ProfileID
group by ProfileID
being max(Created) --reads: being a record which has the maximum created field value

And leave it to the RDBMS to decide how to make it work. In procedural coding, while you are searching a set for a maximum value, you can also keep hold of the entity which has that maximum. There’s no reason this can’t work.

It seems the limitation is the SQL standardisation body. I guess someone could always implement a workaround: create a plugin for open-source SQL query tools, as well as open-source functions to convert SQL+ [with abilities such as those introduced above] to plain SQL.

(By the way, I have by no means completely thought out all of the above, but I hope it conveys the spirit of my frustrations and of a possible solution – I hope some RDBMS experts can comment on this dilemma.)

Mining Space

It’s quite an aspirational idea – to even look at mining asteroids in space. It may feel unreachable, something that will always be put off to the future. But the creation of the new company Planetary Resources is real, with financial backers and a significant amount of money behind them. We’re currently in transition: government, and particularly the U.S. government, is minimising its operational capacity for space missions, while the commercial sector is being encouraged and is growing. For example, Sir Richard Branson’s Virgin Galactic, as well as other organisations, is working toward real, affordable (if you’re rich…) space tourism, and by extension the commoditisation of space access in general, bringing down prices and showing investors that space isn’t just for science anymore; you can make a profit.

I recently read a pessimistic article, one where the break-even price for space mining is in the hundreds of millions of dollars for a given mineral. One needs to be realistic; however, in this article I think the author is being way too dismissive. You see, there are many concepts in the pipeline which could significantly reduce the cost of Earth-space transit. My most favoured is the space elevator, where you don’t need a rocket to climb kilometres above the Earth (although you would likely still need some sort of propulsion to accelerate enough to hold in orbit).

But as well as being across the technology, a critic needs to be open to other ideas. For example, why bring the minerals back to Earth? Why not attempt to create an extra-terrestrial market for them? It may well cost much more to launch a large bulk of material into orbit than to extract material from an asteroid (in the future), with space factories building cities in space.

Of course, I still think space mining is hopeful at best; let’s balance despair with hopeful ideas.


Introducing OurNet – A community project

I’ve kept it under wraps for a while, but hope to make this a more public project. This lowers any prospect of me making money from it, but does make it more likely that I will see it happen.

While campaigning and thinking laterally on the issues and technologies of the NBN, I devised some interesting configurations of wireless and networking devices which could create a very fast internet for very little cost.

Each aspect alone provides little improvement, but together they can form an innovative, commercialisable product.

Aspect 1 – Directional links are fast and low noise

Nothing very new here. Directional links are used for backhaul in many situations: in industry, education, government and commercially. Use a directional antenna and you can focus all the electromagnetic radiation. On its own this cannot create a 10Gbps internet system to each home at a commercialisable price.

Aspect 2 – Short directional links can be made very fast

There is lots of research in this domain at the moment. Think of the new Bluetooth standard, WiGig, UWB and others, all trying to reduce wires in the home and simplify media connectivity. If, rather than connecting house to local aggregation point, we connected house to house, a short-link technology could be employed to create links in the order of 10Gbps between houses. But on its own this does not create a low-latency network; with all those houses, the latency would add up.

Aspect 3 – Mesh network node hop latency can be driven to practically 0

When you think wireless mesh, you think of WiFi systems. Don’t. I have devised two methods for low-latency switching across a mesh (the first is sketched in code below). Both involve routing once to establish a link (or a couple of links).

The first establishes a path by sending an establishment packet containing all the routing information, which is popped at each hop, along with a pathid. Subsequent data packets then include just the pathid and are switched (not routed) to the destination. The first method requires buffers.

The second establishes a path similarly to the first method, except that rather than a pathid, each node reserves timeslices, at which point the correct switching occurs, a bit like a train track. This one can potentially waste some bandwidth, particularly with guard intervals; however, the first method can supplement it, sending packets even in the guard intervals and unreserved timeslices. The second method does not require buffers.

The second method is best for reserving static bandwidth, such as for a phone or video call, or for a baseload of internet connectivity, so HTTP requests are very responsive. The first is for bursting above that minimum baseload of connectivity.

There is a third (and fourth and fifth – multicast and broadcast) method, whereby very large chunks of data can simply be sent with the route, or routed on demand. Such a method might be eliminated entirely, though, as it overlaps too much with the first.
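A minimal sketch of the first method in C# (all names hypothetical; buffers and error handling omitted). An establishment packet carries the full route plus a pathid; each hop pops its own forwarding decision and remembers it, so data packets are then switched by a single table lookup rather than routed:

using System.Collections.Generic;

class MeshNode
{
  // pathid -> outgoing link index, populated during path establishment
  private readonly Dictionary<int, int> switchTable = new Dictionary<int, int>();

  // Establishment packet: pop this hop's forwarding decision and remember it
  public void OnEstablish(int pathId, Stack<int> remainingHops)
  {
    int outLink = remainingHops.Pop();
    switchTable[pathId] = outLink;
    ForwardEstablish(outLink, pathId, remainingHops);
  }

  // Data packet: one O(1) lookup and transmit, no routing decision,
  // which is what drives the per-hop latency toward zero
  public void OnData(int pathId, byte[] payload)
  {
    Send(switchTable[pathId], payload);
  }

  void ForwardEstablish(int link, int pathId, Stack<int> hops) { /* radio TX */ }
  void Send(int link, byte[] payload) { /* radio TX */ }
}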

This could be implemented initially with an FPGA and other components, such as a GPS module for accurate timing (or a Kalman-filtered multiple-quartz system), and eventually mass-produced as an ASIC solution.

Aspect 4 – Total mesh bandwidth can be leveraged with distributed content

If every house has 4 links of 10Gbps, you can see how quickly the total bandwidth of the mesh increases. However, this total bandwidth would be largely untapped if all traffic had to flow to a localised point of presence (PoP). That would be the potential bottleneck.

However, one could very easily learn from P2P technologies, and the gateway at the PoP could act as a tracker for distributed content across the mesh. Each node could cheaply store terabytes of data. So when you go to look up The Muppets – Bohemian Rhapsody, you start getting the content stream from YouTube, but it’s then cut over once you have established a link to the same content on the mesh.

Problems

There are some problems to work through in this grand plan, but it should only be a matter of time before solutions are found.

The first is finding the perfect short links. Research thus far has not been directed at developing a link specifically for our system; at the moment we would be re-purposing a link to suit our needs, which is completely viable. However, to gain the best performance, one would need to initiate specific research.

The second is installation: we need to find the best form factor and installation method for each node on houses. I anticipate that a cohesive node is the best option, with all components, including the radios and antennas, on the same board. Why? Because every time you try to distribute components, you need to go from our native 64-bit communication paths to Ethernet or RF over SMA etc., gaining latency and losing speed and/or gaining noise. However, by having everything together, you increase the distance between nodes. One possible compromise could be to use waveguide conduit to carry the various links closer to the edge of the house, capped with plastic to prevent spiders getting in.

The final problem is a subjective one: power consumption. However, this is a moot point for various reasons. For one, the node can be designed for low power consumption. Secondly, the link technologies need not be high-power devices; I’ve seen some near-IR transmitters just come out (not commercially yet) which cost about a dollar (for the whole lot) and can reach speeds of 10Gbps at very low power. Finally, with the falling cost of solar panels, one could incorporate a panel (with battery) into the node to lower installation costs.

Opportunities / Disruptions

A new paradigm in internet:

OurNet is the current name for a reason. People own their own mobile phone and pay it off on a plan. With OurNet you can own your own node and pay it off on a plan. But in addition, the node you own becomes part of the actual network as well as your connection to it: it forms the backhaul AND the last mile! Hence it being everyone’s net. Such a shift in ownership is sure to have an impact on consumers: a new way of accessing the internet and a new paradigm of belonging and contributing.

And of course OurNet achieves fixed-line-like infrastructure with wireless technology. Individuals, businesses and datacentres could have four (or even more) redundant links, with each link able to take multiple redundant paths. This is not possible with the current (domestic) hierarchical model of the internet, where one needs to subscribe to multiple vendors to achieve redundancy, costing $$$. You could reliably host your own website from your home or business, to the world!

Faster Mobile communications

With a node on every house, mobile communication can become very fast with very low contention. Each node can be equipped with an omni-directional antenna for short-range communication. In addition, a beam-forming directional antenna or MEMS-tunable antenna can supplement or replace the omni-directional antenna, allowing very high-speed, low-noise links to mobile communicators.

High Precision Positioning

OurNet is made up of fixed-position nodes with super-high-resolution timing. If this is leveraged, GPS systems can be enhanced to the millimetre, opening up further opportunities for driverless cars and the like (they already work well with visual object detection and lidar, but an additional reference point can’t hurt).


Carbon Tax and EVs

Just a quick one…

There are many reasons Electric Vehicles (EVs) are becoming more popular: improvements in technology for greater range, and production by bigger companies lowering prices. But there is one major driving factor: running cost.

The high and at times rising price of fossil fuels makes consumers look elsewhere and think outside their usual comfort zone. Electricity is cheap. Because of this, technology is being researched and major car companies are adjusting.

So what will happen when a Carbon Tax comes into Australia, one which doesn’t increase petrol prices, yet does increase electricity prices? Now, I don’t subscribe to the Global Warming scare; I’ve personally read a few papers and plenty of commentary, enough to understand that this is a hoax.

However, it seems a contradiction to create regulation which will adversely affect the market, making consumers less likely to choose the “greener” option. (In my opinion EVs are not just cheaper to fuel, but also a lot cheaper to maintain – no engine oil, tuning, timing belt, radiator, etc.)


Are sciences becoming too philosophical?

Logic and reason are powerful things and great for debate; however, they are also dangerous in the absence of facts. Just because one can reason with logic about an issue doesn’t mean the conclusion is true.

These thoughts are of course provoked somewhat by recent scientific news and debate, particularly on A Universe From Nothing, but also Anthropogenic Global Warming.

In A Universe From Nothing (UFN), eminent scientists (physicists and cosmologists) put forward models and analogies for a Universe (and particularly a Big Bang) that could viably appear from nothing, without a spiritual force such as God.

Their theories do make sense; they are well reasoned. From M-Theory (String Theory) to the Multiverse, there are plenty of models which can describe their hypotheses.

However, in the absence of empirical data (observation), there is no way to verify such hypotheses. Just because you have a complex theory which fits together nicely doesn’t mean you have found an objective truth.

In the article, the author writes

can almost put under a lab microscope.

Now, you can either observe or not observe; there is no “nearly observe”. In M-Theory they have all but ruled out the possibility of observing the theoretical vibrating strings at the centre of matter. Perhaps graviton particles can move between dimensions? Perhaps we can observe them? It’s all inconsequential if it cannot be proven, and may as well be called a philosophical statement.

Sometimes complex philosophical arguments are seemingly easy to break with simpler logic and reason. Sometimes these so-called scientists get carried away, perhaps believing their logic trumps common sense. Take this quote, for example:

Indeed, you might ask why it is that we think there is something here at all

Every individual is self-aware, alive. Something (matter) is here. Why make assertions suggesting there’s nothing here at all? At best it is a poor analogy with which to frame their theories. And how are they productive? Of course one needs to devise hypotheses, but until there is proof, or a pathway to finding proof, why publish them? You have to wonder whether these people can indeed be called scientists, or rather fathers of a new religion.

For the Sydney Morning Herald to publish this ridiculous stuff, it must be a slow news day.


What the… Payroll tax?

I didn’t really notice the GST debate, except being annoyed at all prices increasing when GST was introduced (I was in high school). It turns out that a major reason for its introduction was to eliminate many state taxes, one of these being Payroll tax…

Have a look: http://www.sro.vic.gov.au/sro/SROnav.nsf/LinkView/8AFF7B9FB4EB3733CA2575D20022223D5DB4C6346AF77ABBCA2575D10080B1F7

It turns out that if I employ too many people, I will have to pay the state 4.9% tax on all the gross wages paid to my employees – including superannuation! Not only is this a disincentive to employ, it’s also yet another administrative burden which limits growth. I hear it all the time: ultimate success requires flexibility and scalability – Payroll tax is an ugly and unnecessary burden.

Sure, we can’t just pull such revenue out from under the states, but it can be replaced with revenue from another, more efficient tax – such as GST. At just 10%, our GST is relatively low compared to other countries; in Europe some countries have a GST or VAT of 25%.

So why not simply increase GST? Consumers, AKA voters, are the end users and effectively the ones who pay the tax. Even though consumers could ultimately pay less in the long run, because companies would no longer need to pay payroll tax, the whole economy changes. Smaller businesses that didn’t previously pay Payroll tax would effectively be charging their customers more, because they have no regained revenue from a dropped tax to discount from. Small changes to the rate over a long time, matched with reductions in payroll tax in the states, may work best. But in summary, GST rate increases are political poison for non-business-owning voters.

Another issue is fraud. As GST increases, the returns on VAT fraud become greater. Countries such as Sweden (25%) and the UK (20%) are subjected to simple but hurtful frauds which effectively steal from GST revenue. It basically works by having a fake company be liable to pay GST and a legitimate company entitled to the refund; the fake company goes bankrupt. As the GST rate increases, the payback from such frauds increases, encouraging more incidents. It seems that any macro-economic change, either short-term (government stimulus) or long-term (tax reform), opens the door for corruption and rorting. If the GST rate is to be increased, the right legislation needs to be in place to prevent such fraud.

So in the end, the ultimate way for a business to overcome Payroll tax is to innovate: good products which provide a comfortable return, and internal improvements to efficiency, reducing the need to hire as many staff and maintaining a competitive edge.

DIDO – Communication history unfolding?

First of all I just want to say – THIS MAY BE HUGE!!

I read this article last night: http://www.gizmodo.com.au/2011/08/dido-tech-from-quicktime-creator-could-revolutionise-wireless-broadband/

In plain English, a company has discovered a way to dramatically improve mobile internet. It will be 5-10 years before it’s commercialised; however, I believe it will happen sooner, with many realising just how revolutionary it is, investing more money and attracting more resources to get it done sooner.

I am not a representative of the company, but I have been involved in understanding and pondering wireless technology, even coming up with faster and more efficient wireless communication concepts, though none as ground-breaking as this one. I don’t claim to know all the details for certain, but having read the whitepaper I believe I can quite accurately assume many details and future considerations. Anyway, I feel it’s important for me to help everyone understand it.

How does it work (Analogy)?

Imagine walking down the street; everything is making noise: cars, people, the wind. It’s noisy, and someone in the distance is trying to whisper to you. Suddenly all the noise disappears, and all you can hear is that person, clearly. This is because someone has adjusted all the sounds around you to cancel out, leaving just that person's voice.

How does it work (Plain English)?

When made available in 5-10 years:

  • Rural users will have speeds as fast as in CBDs, receiving signals from antennas as far as 400km away!
  • In cities, there will need to be several antennas within ~60km of you
    • today there are so many antennas required for mobile internet; the number will be reduced…
    • the number of antennas per mobile phone tower or building will be reduced to just one.
  • there will be a central “server” performing the mathematical calculations necessary for the system.

The most technical part (let’s break it down):

  1. Unintended interference is bad (just to clarify and contrast)…
  2. DIDO uses interference, but in a purposeful way.
  3. DIDO uses multiple antennas, so that at a particular place (say your house) the signals interfere with each other in a controlled way, leaving a single channel intended for you.
  4. It’s similar to how this microphone can pick up a single voice in a noisy room – http://www.wired.com/gadgetlab/2010/10/super-microphone-picks-out-single-voice-in-a-crowded-stadium/ – but a little different…

How does it work (Technical)?

I have been interested in two related concepts recently:

  1. Isolating a single sound in a noisy environment – http://www.wired.com/gadgetlab/2010/10/super-microphone-picks-out-single-voice-in-a-crowded-stadium/
  2. I saw an interview with an ex-Australian spy who worked at a top-secret facility in Australia in co-operation with the US. He was releasing a book, revealing what he could. From this facility he spied on radio communications around the world. I wondered how, and then figured they likely employ the “super microphone” method.

When I heard about this technology last night, I didn’t have time to look at the whitepaper, but assumed the receivers might use “super microphone” sort of technology. It turns out the inverse (not the opposite) is true.

Scenario:

User A’s radio is surrounded by radios from DIDO. The DIDO server calculates what signals need to be generated from the various radios such that, when converging on User A, they “interfere” as predicted to leave only the required signal. When there are multiple users, the mathematical equations take care of working out how to converge the signals. As a result, the wireless signal in the “area of coherence” for the user is as if the user had the full spectrum 1:1 to an exclusive wireless base station.
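My assumption of the underlying maths (consistent with the interference-nulling reference in the criticisms below) is linear precoding. With N base-station antennas, N users and channel matrix H, the server pre-inverts the channel:

y = H x (the signals that arrive at the user radios)
x = H⁻¹ s (the precoded signals the DIDO antennas actually transmit)
⇒ y = H H⁻¹ s = s (each user k receives only their own stream sₖ)

The “area of coherence” falls out of the algebra: at user k’s position the contributions from all the antennas sum to sₖ, while the streams intended for everyone else cancel.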

Implications for domestic backhaul

There would need to be fibre links to each of the antennas deployed, but beyond that, remaining backhaul and dark fibre will rapidly become obsolete. DIDO can reach 400km in its rural mode, bouncing off the ionosphere and still maintaining better latency than LTE, at 2-3ms.

Physical Security?

We hear about quantum communication and the impossibility of deciphering the messages. I believe a similar concept of physical security can be achieved with DIDO. Effectively, DIDO provisions areas of coherency: areas in 3D space where the signals converge, cancelling out signal information intended for other people. So you only physically receive a single signal on the common spectrum; you can’t physically see anyone else’s data unless you are physically inside their target area of coherency. This does not, however, guarantee privacy. By deploying a custom system of additional receivers sitting outside the perimeter of your own area of coherency, you could sample the raw signals before they converge. Using complex mathematics, and armed with the exact locations of the DIDO system antennas, one could theoretically single out the individual raw signals from each antenna and their times of origin, and then calculate the converged signal at other areas of coherence. This is by no means a unique security threat; of course, one could simply employ encryption over the channel for secrecy.

This doesn’t break Shannon’s law?

As stated in their white paper, people incorrectly apply the law to spectrum rather than channel. Even before DIDO, one could use directional wireless from a central base station and achieve 1:1 channel contention (though that’s difficult to achieve practically). DIDO creates “areas of coherency” where all a receiving antenna picks up is the signal intended for it.

Better than Australia’s NBN plan

I’ve already seen some people attempt to discredit this idea, and I believe they are both ignorant and too proud to give up their beloved NBN. I have maintained the whole time that wireless technology would exceed the NBN believers' interpretation of Shannon’s law. Remember, Shannon’s law is about *channel*, not *spectrum*. DIDO is truly the superior option: gigabit speeds with no digging! And it is a clear warning that governments should never be trusted with making technology decisions. Because DIDO doesn’t have to deal with channel access, the circuitry for the radios is immensely simplified. The bottleneck will likely be the ADCs and DACs, of which NEC has 12-bit 3.2-giga-sample devices (http://www.physorg.com/news193941421.html). So multi-terabit and beyond is no major problem as we wait for the electronic components to catch up to the potential of wireless!

CRITICISMS UPDATE:

  • One aspect to beware of is the potential need for a 1:1 correlation between base-station antennas and users. I can’t find any literature yet which either confirms or denies such a fixed correlation, but the tests for DIDO used 10 users and 10 antennas.
  • If there must be one antenna per user, this idea isn’t as earth-shattering as I would hope. However, there would still be relevance: 1) it still achieves 100% spectrum reuse, 2) it avoids the pitfalls of centralised directional systems with beam-forming, where obstacles are an issue, and 3) it retains the ability to leverage the ionosphere for rural applications – very enabling.
  • After reading the patent (2007), I see no mention of the relationship between AP antennas and the number of users. However, I did see that there is a practical limit of ~1000 antennas per AP. It should be noted that if this system does require one antenna per user, it would still be very useful as a boost system: everyone has an LTE 4G link, and when downloading a video, the bulkier data is streamed very quickly via DIDO (the number of concurrent DIDO connections being limited by the number of AP antennas).
  • The basis for “interference nulling” was discussed in 2003 by Agustin et al.: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.10.2535
  • Removed many ! from the top, to symbolise the potential for disappointment.
  • Hey, there’s a spell check!
  • Have a look here for the Whirlpool discussion: http://forums.whirlpool.net.au/forum-replies.cfm?t=1747566

Memorable IPv6 Addresses

Back in Nov 2009, I foresaw that IPv6 addresses would become a menace to memorise, so I had a crack at improving the memorability of such addresses; see http://blog.jgc.org/2011/07/pronounceable-ipv6-addresses-wpa2-psk.html?m=1. The basic idea is that sounds which make up words, or resemble words, are much easier to remember than individual digits. I was actually thinking about this idea last night, and how it could be applied to remembering strong passwords.

This morning I got an email from a colleague who pointed out this: http://www.halfbakery.com/idea/IPv6_20Worded_20Addresses#1260513928. I don’t believe the scheme used there is as memorable as mine, but it shows other people are having similar ideas.

Back to my thoughts last night on more memorable passwords. We know we’re supposed to use upper and lower case, special symbols, etc. But even then you’re not using the full 64-bit capacity of the recommended 8-character string. To use my scheme to memorise more secure passwords, you would simply use my tool.
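As an illustration only (the scheme in my original proposal differs in detail), a proquint-style encoding shows the principle: each 16-bit hextet of an IPv6 address maps losslessly to one pronounceable five-letter group:

using System;

class PronounceableAddress
{
  static readonly char[] Con = "bdfghjklmnprstvz".ToCharArray(); // 16 consonants = 4 bits
  static readonly char[] Vow = "aiou".ToCharArray();             // 4 vowels = 2 bits

  // One hextet (16 bits) -> consonant-vowel-consonant-vowel-consonant (4+2+4+2+4 bits)
  static string EncodeHextet(ushort v)
  {
    return new string(new[]
    {
      Con[(v >> 12) & 0xF],
      Vow[(v >> 10) & 0x3],
      Con[(v >> 6) & 0xF],
      Vow[(v >> 4) & 0x3],
      Con[v & 0xF],
    });
  }

  static void Main()
  {
    ushort[] hextets = { 0x2001, 0x0db8, 0x0000, 0x0042 };
    // 2001:db8:0:42 -> "fabad-bukum-babab-badaf"
    Console.WriteLine(string.Join("-", Array.ConvertAll(hextets, EncodeHextet)));
  }
}

Because the mapping is reversible, the pronounceable form is the address rather than just a mnemonic, and the same trick applies to the 64-bit password case above.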

I made a video 🙂

[youtube=http://www.youtube.com/watch?v=f60GGxPskG4]