Don’t use Resource Strings with C#

I recommend separate static class files with static readonly string fields instead.

Problems:

  • The XML resource (.resx) files are hard to source-control – they diff and merge poorly
  • The resource-string editor UI is hard to scroll through and edit in place

It’s better when strings are in code files, with multi-line strings using @"..." or $@"...". Maybe append “Resources” to the end of the class name, as a convention – see the sketch after the benefits list below.

Benefits:

  • You can use any coding techniques with them
  • You can keep things cohesive, creating multiple separate classes
  • You can have static functions that take and apply parameters
  • You use the normal code editor
  • You can press F12 on a reference and get directly to editing the string
  • No XML to deal with – fewer merge conflicts
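
For example, a minimal sketch of the convention (the class and string names here are my own invention, purely illustrative):

    public static class InvoiceEmailResources
    {
        public static readonly string Subject = "Your invoice from Acme";

        public static readonly string Footer = @"Regards,
    The Acme Accounts Team";

        // A static function can take and apply parameters, keeping the
        // formatting logic right next to the string it belongs to.
        public static string Greeting(string customerName)
        {
            return $"Dear {customerName},";
        }
    }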

 

We will never meet a space-exploring or pillaging Alien

The thought of Aliens captures the imagination: countless worlds beyond our own, with advanced civilisations. But I have strong suspicions that we will never meet an Alien. I’ve always had my doubts, and then I recently read an article which uses very sound reasoning to preclude their existence (I don’t have the reference for the specific one).

DON’T EXIST

It basically goes:

  1. The Universe is roughly 13.8 billion years old – plenty of time for Aliens to develop technology
  2. The Universe is gigantic – plenty of places for various Aliens to develop technology
  3. We would want to find other Aliens – other Aliens would likewise want to look for other life
  4. Why haven’t they found us? Why haven’t we found them?
  5. Because they don’t exist
When we first started surveying space and searching for Aliens, we would have found them; they, as we do, would have been transmitting signals indicating intelligence.
NEVER MEET
But there is also another, less compelling, reason. The Universe appears to be expanding, and that expansion is accelerating. Unless worm-hole traversal is found to be practically feasible, the whole meeting part will never happen.
OTHER REASONS
Here are some more links to other blogs and articles I found, which add more information and other reasons logically arguing that Aliens don’t exist:
I guess that one or even several logical reasons cannot prove absolutely that Aliens do not exist; we can only be, say, 99.9% or more confident. Only if we searched all the cosmos and found none could it be an absolute fact. We could have an Alien turn up tomorrow and explain that they have searched the Universe and only just recently found us, that it’s only them and us, and that their home world is hidden behind another galaxy or nebula or something. So logic alone is not definitive, but it is certainly a good guide if the logic itself is not disproven.
Take Fermat’s Last Theorem for example: it was proven “358 years after it was conjectured”. There was an infinite number of cases to check, so an exhaustive evaluation was not practical; a mathematical proof was required. Many believed it to be true of course, but Mathematics, being a science, required proof.
So unless we can prove that Aliens don’t exist with scientific observation, and not just with probability, one cannot say with authority that Aliens don’t exist. At the same time, one definitely cannot believe that Aliens do exist without significant proof.

Mining Space

It’s quite an aspirational idea – to even look at mining asteroids in space. It may feel unreachable, something that’s always going to be put off to the future. But the creation of the new company Planetary Resources is real, with financial backers and a significant amount of money behind it. We’re currently in transition: government, and particularly the U.S. government, is minimising its operational capacity for space missions, while the commercial sector is being encouraged and is growing. For example, Sir Richard Branson’s Virgin Galactic, as well as other organisations, is working toward real, affordable (if you’re rich…) space tourism and, by extension, commoditisation of space access in general – bringing down prices and showing investors that space isn’t just for science anymore; you can make a profit.

I recently read a pessimistic article, one where the break-even price for space mining of a given mineral is in the hundreds of millions of dollars. One needs to be realistic, but in this article I think the author is being far too dismissive. There are many concepts in the pipeline which could significantly reduce the cost of Earth-space transit. My favourite is the space elevator, where you don’t need a rocket to reach kilometres above the Earth (although you would likely still need some sort of propulsion to accelerate to orbital velocity).

But as well as being across the technology, a critic needs to be open to other ideas. For example, why bring the minerals back to Earth? Why not attempt to create an extra-terrestrial market for them? It may well cost much more to launch a large bulk of materials into orbit than to extract materials from an asteroid (in the future), with space factories building cities in space.

Of course, I still think space mining is hopeful at best; let’s balance despair with hopeful ideas.


SQL-like like in C#

Sometimes you need to build dynamic LINQ queries, and that’s when the Dynamic Query Library (download) comes in handy. With this library you can build a where clause using BOTH SQL and C# syntax. Except for one annoying problem: like isn’t supported.

When using pure LINQ to build a static query, you can use SqlMethods.Like. But you will find that this only works when querying a SQL data source – it doesn’t work for local collections, as there’s no C# implementation.

My Solution

So I mocked up a quick and dirty like method which only supported a single % wildcard, no escape characters and no _ placeholder. It did the job, but with so many people asking for a solution which mimics like, I thought I’d make a proper one and publish it, Public Domain-like. (A simplified sketch of the matching logic follows the feature list below.)

It features:

  • Wildcard is fully supported
  • Placeholder is fully supported
  • Escape characters are fully supported
  • Replaceable tokens – you can change the wildcard (%), placeholder (_) and escape (!) tokens, when you call the function
  • Unit Tested
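
The real implementation is in the download below; purely as an illustration of how such a method can work, here is a minimal regex-based sketch (my own simplification, not the downloadable code), exposing the same EvaluateIsLike(string, string) entry point that step 4 later relies on:

    using System.Text;
    using System.Text.RegularExpressions;

    public static class SQLMethods
    {
        // Two-argument overload matching the reflection lookup in step 4 below.
        public static bool EvaluateIsLike(string input, string pattern)
        {
            return EvaluateIsLike(input, pattern, '%', '_', '!');
        }

        // Replaceable wildcard/placeholder/escape tokens.
        public static bool EvaluateIsLike(string input, string pattern,
            char wildcard, char placeholder, char escape)
        {
            var regex = new StringBuilder("^");
            bool escaped = false;
            foreach (char c in pattern)
            {
                if (escaped) { regex.Append(Regex.Escape(c.ToString())); escaped = false; }
                else if (c == escape) escaped = true;                 // next char is literal
                else if (c == wildcard) regex.Append(".*");           // % matches any run
                else if (c == placeholder) regex.Append(".");         // _ matches one char
                else regex.Append(Regex.Escape(c.ToString()));
            }
            regex.Append("$");
            // IgnoreCase mimics the usual case-insensitive SQL collation (an assumption).
            return Regex.IsMatch(input ?? "", regex.ToString(), RegexOptions.IgnoreCase);
        }
    }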

Downloads:

Adding like support to the Dynamic Query Library – Dynamic.cs

I also modified the Dynamic Query Library to support like statements, leveraging the new function. Here are the steps required to add support yourself:

1. Add the Like value into the ExpressionParser.TokenId enum

            DoubleBar,
            Like
        }

2. Add the token.id == TokenId.Like clause as shown below into ExpressionParser.ParseComparison()

        Expression ParseComparison() {
            Expression left = ParseAdditive();
            while (token.id == TokenId.Equal || token.id == TokenId.DoubleEqual ||
                token.id == TokenId.ExclamationEqual || token.id == TokenId.LessGreater ||
                token.id == TokenId.GreaterThan || token.id == TokenId.GreaterThanEqual ||
                token.id == TokenId.LessThan || token.id == TokenId.LessThanEqual ||
                token.id == TokenId.Like) {

3. Add the TokenId.Like case as shown below into the switch found at the bottom of the ExpressionParser.ParseComparison() function

                    case TokenId.LessThanEqual:
                        left = GenerateLessThanEqual(left, right);
                        break;
                    case TokenId.Like:
                        left = GenerateLike(left, right);
                        break;
                }

4. Add the following inside the ExpressionParser class (the SQLMethods class needs to be accessible – reference the library or copy in its source, and add a using directive for the appropriate namespace)

        Expression GenerateLike(Expression left, Expression right)
        {
            if (left.Type != typeof(string))
                throw new Exception("Only strings supported by like operand");

            return IsLike(left, right);
        }

        static MethodInfo IsLikeMethodInfo = null;
        static Expression IsLike(Expression left, Expression right)
        {
            if (IsLikeMethodInfo == null)
                IsLikeMethodInfo = typeof(SQLMethods).GetMethod("EvaluateIsLike", new Type[] { typeof(string), typeof(string) });
            return Expression.Call(IsLikeMethodInfo, left, right);
        }

5. In ExpressionParser.NextToken(), change the start of the default switch case as shown below

                default:
                    if (Char.IsLetter(ch) || ch == '@' || ch == '_') {
                        do {
                            NextChar();
                        } while (Char.IsLetterOrDigit(ch) || ch == '_');

                        string checktext = text.Substring(tokenPos, textPos - tokenPos).ToLower();
                        if (checktext == "like")
                            t = TokenId.Like;
                        else
                            t = TokenId.Identifier;
                        break;
                    }

Example

I use this in my own business system, but I preprocess the LIKE rule, as I have quite a few “AI” rules for bank transaction matching. (You can also use like statements directly – see the sketch just below.)
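
For instance, a hypothetical direct usage once the steps above are applied (the Person class and data are my own example; the string-predicate Where overload ships with the Dynamic Query Library):

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Linq.Dynamic;   // namespace of the Dynamic Query Library

    class Person { public string Name { get; set; } }

    class Demo
    {
        static void Main()
        {
            var people = new List<Person>
            {
                new Person { Name = "Jonathan" },
                new Person { Name = "Bob" },
            }.AsQueryable();

            // String literals use double quotes in the dynamic expression language.
            var matches = people.Where("Name like \"Jo%\"").ToList();
            Console.WriteLine(matches.Count); // -> 1 (Jonathan)
        }
    }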

There are many ways to cache; here is how I cache a predicate, looping over the set of AIRules in my DB:

RuleCache[i].PreProcessedPredicate = DynamicQueryable.PreProcessPredicate<vwBankTransaction>(RuleCache[i].Filter); //Change the textbased predicate into a LambdaExpression

And then here is how I use it, looping over the array of cached rules:

bool MatchesRule = DynamicQueryable.Where(x.AsQueryable(), RuleCache[i].PreProcessedPredicate).Any(); //Run the rule

Where `x` is a generic list (not a DB query) containing the one record I am checking. (Yes, it would be possible to loop over a larger set [of bank transactions], but I haven’t got around to such a performance improvement in my system – I haven’t noticed any performance issues – it’s not broken.)


Carbon Tax and EVs

Just a quick one…

There are many reasons Electric Vehicles (EVs) are becoming more popular: improvements in technology for greater range, and production by bigger companies lowering prices. But there is one major driving factor: running cost.

The high and, at times, rising price of fossil fuels makes consumers look elsewhere and think outside their usual comfort zone. Electricity is cheap. Because of this, technology is being researched and major car companies are adjusting.

So what will happen when a Carbon Tax comes into Australia, one which doesn’t increase petrol prices, yet does increase electricity prices? Now, I don’t subscribe to the Global Warming scare; I’ve personally read a few papers and plenty of commentary, and understand this to be a hoax.

However, it seems a contradiction to create regulation which will adversely affect the market, making consumers less likely to choose the “greener” option. (In my opinion EVs are not just cheaper to fuel, but also a lot cheaper to maintain – no engine oil, tuning, timing belt, radiator, etc.)


Are sciences becoming too Philosophical?

Logic and Reason are powerful things and great for debate; however, they are also dangerous in the absence of facts. Just because one can reason with logic about an issue doesn’t mean the conclusion is true.

These thoughts are of course provoked somewhat by recent scientific news and debate, particularly on A Universe From Nothing, but also Anthropogenic Global Warming.

In A Universe From Nothing (UFN), eminent scientists (physicists and cosmologists) put forward models and analogies of a Universe (and particularly the Big Bang) being completely viable to appear from nothing, without a spiritual force such as God.

Their theories do make sense and are well reasoned, from M-Theory (String Theory) to the Multiverse. There are plenty of models which can describe their hypothesis.

However, in the absence of empirical data – observation – there is no way to verify such a hypothesis. Just because you have a complex theory which fits together nicely doesn’t mean you have found an objective truth.

In the article, the author writes:

“…can almost put under a lab microscope.”

Now, you can either observe or not observe; there is no “nearly observe”. In M-Theory they have all but ruled out the possibility of observing the theoretical vibrating strings at the centre of matter. Perhaps graviton particles can move between dimensions? Perhaps we can observe them? It’s all inconsequential if it cannot be proven, and may as well be called a philosophical statement.

Sometimes complex philosophical arguments are seemingly easy to break with simpler logic and reason. Sometimes these so-called scientists can get carried away, perhaps believing their logic trumps common sense. Take this quote for example:

“Indeed, you might ask why it is that we think there is something here at all”

Every individual is self-aware, alive. Something (matter) is here. Why make assertions suggesting there’s nothing (matter) here at all? At best it is a poor analogy used to frame their theories. And how is it productive? Of course one needs to devise hypotheses, but until there is proof, or a pathway to finding proof, why publish such a hypothesis? You have to wonder whether these people can indeed be called scientists, or rather fathers of a new religion.

For the Sydney Morning Herald to publish this ridiculous stuff, it must be a slow news day.


Civilisation Manual


What would happen if an asteroid struck our planet and left a handful of people to restart civilisation? Or if you and a few people washed up on an uninhabited island with nothing but the shirts on your backs? Many would picture building huts, scavenging for food, starting some basic crops if possible. But that would be it, the limit. You wouldn’t comprehend completely rebuilding civilisation and the luxuries available today. But I do. I’m curious: what would it take? If all you could take with you was a book, what would be written in that book? What does the Civilisation Manual say?

Whenever there is talk of civilisation, it seems all you hear is philosophy, but seldom the practicality of achieving it. I assert that the creation of such a Civilisation Manual would be a useful undertaking, not so much for its hypothetical uses, but rather for its ability to teach how modern economies work. I believe such a book should contain all, if not more, of the information taught to children in school. Such a book might be very large.

There are also additional questions to be asked of the hypothetical end-of-the-world scenario. How long would it take to rebuild civilisation to current-day technology? What tools would most quickly speed up the process? Is there a minimum number of people required for this to work? What level of intelligence is required to execute it? Just one genius? How long until the female primeval desire for shopping is satisfied? And the perfect shoe manufactured?


I would love to see a community website started to collect such information. We already have Wikipedia, but it does not tell you the intimate detail of how to find iron ore, how to cast iron, how to produce flour from wheat, or how to build a crude resistor or capacitor to help you make more refined components. It is this knowledge which is hard to find; perhaps we are forgetting how we built a digital civilisation.

Also, given the opportunity to build a civilisation from scratch, there may be some interesting ideas which could be included, never encountered in history before. For example, the book could focus on automation, relieving humans from hard and repetitive tasks. This could go even further than what is achieved today; in 10 years, perhaps robots will be washing and ironing clothes, cooking meals, etc.

What a Civilisation Manual should NOT contain:

  • Advertising
  • References to Gilligan’s Island
  • Everything – put in the most useful information first, and if you have time, add more.

What a Civilisation Manual should contain:

  • Very brief justifications of suggestions – it’s not a history book, it’s a survival book, but it’s good to reassure the reader of the thought that goes into each suggestion. Such as: if X happens to a person, cut their leg off. Briefly describing blood poisoning might make that more reassuring.
  • Tried and tested procedures and instructions – can a 10-year-old kid work it out, or does it require an academic professor? And do you replace the palm-frond roof monthly or yearly?
  • Many appendices:
    • A roadmap to digital civilisation – showing a tree of pre-requisite steps and sections on achieving each of the steps.
    • Recipes – Particularly useful when all you’ve got is coconuts and fish. How do you clean a fish?
    • Inter-language Dictionary – who knows who you’ll be with.
    • Plant Encyclopaedia – Identification of and uses for plants.
    • Animal Encyclopaedia – Do I cuddle the bear?
    • Health Encyclopaedia – How do I deliver the baby?

And an example of chapters:

  • Something like “Don’t panic, breathe… you took the right book, in 5 years you’ll have a coffee machine again”

  • Chapter 1: Basic Needs – You’ll find out about these first, food, water, shelter.
  • Chapter 2: Politics and Planning – Several solutions for governing the group should be provided to choose from, a bit like a glossy political catalogue. It won’t contain things like Dictatorship or Monarchy; more like Set Leader, Rotating Leader, or The Civilisation Manual is our leader. Planning will mostly be pre-worked in the appendix, where technology succession is described with expected timelines for each item.
  • Chapter 3: Power – No, not electricity: power. This section explains its importance and how to harness it, from wind and water for milling to animals for ploughing. Of course the progression of civilisation would eventually lead to electricity.
The book should also contain several pencils, many blank pages, and maybe we could sneak in a razor blade. This doesn’t break the rule of only being allowed one book – publishers are always including CDs and bookmarks…
I think it would be interesting anyway…

Robots – the working class


I have found myself considering whether doom would really befall the world if we mass-employed robots to do all of our dirty work. Would we be overrun by machines which rose up and challenged their creators? Would our environment be destroyed and over-polluted? I think not. In fact our lives would be much more comfortable, and we would have a lot more time.

Life on Earth got a lot better around the 1800s, the dawn of the industrial age. In the two centuries following 1800, the world’s average per capita income increased over 10-fold, while the world’s population increased over 6-fold [see Industrial Revolution]. Essentially machines – very simplistic robots – made human lives much better. With steam power and improved iron production, the world began to see a proliferation of machines which could make fabrics, work mines, machine tools, increase production of consumables, and enable and speed up the construction of key infrastructure. Importantly, it is from the industrial revolution that the term Luddite originated – those who resisted machines because their jobs were displaced.

We now find ourselves 200 or so years later, many of us in very comfortable homes, with plenty of time to pursue hobbies and leisure. There remains, however, scope for continued development, allowing machines and robots to keep improving people’s lives. It is understood that one or more patents actually delayed the beginning of the industrial age, which is of course why I advocate Technology Development Zones with relaxed rules regarding patents. However, I believe there is a very entrenched Luddite culture embedded in society.

Now, being the organiser of the campaign NBNOptions.org, I have been accused of being a Luddite myself. However, no progress has lasted without a sound business case, and the Luddites of the industrial revolution were specifically those put out of business by the machines.

Therefore the current or potential Luddites include:

  • The Automotive Industry status quo – Movement to electric cars will make hundreds of thousands redundant. Consider how simple an electric car is {Battery, Controller, Motor, Chassis, Wheels, Steering}, and how complicated combustion engines are, with the addition and weight of the radiator, engine block, oil, timing, computer… and all the component manufacturers, fitters, mechanics and further supporting industries that will be put out of business.
  • The Oil industry (and LN2) – Somewhat linked to the Automotive industry. Energy could very well be transmitted through a single distribution system – electricity – at the speed of light. No more oil tankers, service stations, oil refineries, oil pipelines, oil mining, petrol trucks or oil spills. (The replacement for oil needs to be as economical or more economical – no ideologies here.)
  • Transport industry – Buses, trains, trucks, taxis, sea freight and even air travel all currently employ many thousands to sit in a seat and navigate their vehicle. Technology exists to take over and do an even better job. It’s not just safety concerns delaying such a transition, but also the Luddites (and patent squatters).
  • Farming – The technology is possible. We could have economical fruit-picking machines, and many mega-farm operations already have automatic harvesters for grain. Imagine all those rural towns, already under threat of becoming ghost towns, having to contend with technology replacing hard workers.
  • Manufacturing – Is already very efficient, but we still see thousands of people on production lines simply pressing a button. Most manufacturing jobs could be obliterated, with only one or two people required to oversee a factory – how lonely.
  • Housewives – Are possibly not Luddites, given many would relish even more time for leisure and their family; however, so many of their tasks could be completely centralised and automated. Cooking and associated appliances could be abolished entirely: why buy an oven, dishwasher, sink, fridge, freezer, cupboards, dinnerware, pots, pans and stove, and then spend 1-2 hours a day in the kitchen and supermarket, when you could order your daily meals from an industrial kitchen where all meals are prepared by robots for a fraction of the cost and time?
  • Construction – It’s amazing how many people it takes to build a skyscraper or a house. Why does it still require people to actually build them? Why can’t houses be mass pre-fabricated by machines in factories, then assembled by robots on-site? How many jobs would be lost as a result?
  • Services sector – There are many services-sector jobs where software and robots could easily be designed and built to relieve workers of their daily tasks. Accounting could be streamlined such that all business and personal finances are managed completely by software. With robots now aiding in surgery, why can’t robots actually perform the surgery, give a massage, or pull a tooth? Why are there so many public servants dealing with questions, answers and data entry when we have technology such as that found in Watson able to take over such tasks? Even many general practitioners are resisting the power available for self-diagnosis – do you think they’ll fund the further development of such tools?
  • Mining – Is as crude as grain farming and could easily be further automated, making thousands upon thousands redundant in mines, and even those surveying future mining sites.
  • Education – How important is it to have children learn as much as possible while they’re young (beyond simple skills such as reading, writing and arithmetic) when the whole world could be run by software and robots? When complicated questions can be answered by a computer instead of a professor? Why lock children behind desks for 20 hours a week when they could be out playing?
  • Bureaucracy – With no workers there would be no unions and no union bosses, no minimum wage, no work safety inspector…
  • Military – (Ignoring the ideology of world peace.) We already see the success of the UAV, an aircraft which flies autonomously, only requiring higher-level command inputs for its mission. Why enhance soldiers when you can have robot soldiers? War could even be waged without blood, with the winner having enough fire-power at the end to force the loser to surrender outright (quite ridiculous in reality – I know).
  • Care – Many are employed to look after the sick and elderly. Even though the work can be challenging and the pay often low, it’s still a job – a job that robots could potentially do instead.
With time such a list could easily be expanded to encompass everyone. Are we all collectively resisting change?
With a world full of robots and software doing everything, what do humans do with 100% unemployment? Do we all dutifully submit our resumes to Robot Inc three times a week? Would we all get on each other’s nerves? Do we need to work? Would we lose all purpose? Ambition? Dreams?
To best understand how a robot utopia works, just simplify the equation to one person – yourself on an island. You could work every day of your life to make sure you have enough water, food and shelter, or, if you arrived on the island with a sufficient complement of robots, you could enjoy being stranded in paradise. Every step in between, from doing everything yourself toward doing nothing yourself, sees your level of luxury increase.
There’s no doubt that the world will be divided into two classes, those that are human and have a holiday everyday, and those that are robots – the working class.

Improve security with compression


I have a particular interest in encryption and how to make it stronger. Whilst considering OTP (one-time pad) and its vulnerability to reusing a random or pseudorandom stream on plain-text, I was simulating the problem with a puzzle I have come across in the past. (Ever played one of those cryptoquip puzzles in the paper, where one letter is equivalent to another letter? You look at the small words and, with trial and error, guess words until they make sense across the whole sentence.)

I realised that encryption is significantly affected by the entropy of the input plain-text. As far as I know this is an unproven hypothesis, but it is at least easily verifiable for simple encryption, such as the substitution cipher in the cryptoquip puzzle. I believe that source entropy loses its significance to overall security as the encryption method itself improves, though this may only be because once encryption is sufficiently strong, doubling its strength has no perceivable outcome. For example, AES is considered one of the strongest, if not the strongest, symmetric encryption algorithms to date; doubling the trillions upon trillions of computing operations required to break it is not readily perceivable by our minds (and the ten digits on our hands).

It is commonly accepted that you should compress before you encrypt, because encryption increases entropy, which eliminates the ability for any valuable compression afterwards. It should be noted, though, that compression also increases the entropy density of the plain-text, which in light of this article is very good for security.

If you want good security you should consider using compression as well: you get the benefit of an improved cipher input as well as shorter messages. Perhaps compression can improve cipher strength enough that some more computationally efficient ciphers become as strong as or stronger than AES. A minimal sketch of the ordering follows.
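
As a concrete illustration of the compress-then-encrypt ordering, here is a minimal C# sketch using .NET’s GZipStream and AES (the key/IV handling is deliberately simplified and is not production advice):

    using System.IO;
    using System.IO.Compression;
    using System.Security.Cryptography;

    static class SecureBox
    {
        public static byte[] CompressThenEncrypt(byte[] plaintext, byte[] key, byte[] iv)
        {
            // 1. Compression raises the entropy density of what the cipher sees,
            //    and shrinks the message. (Encrypting first would leave the data
            //    incompressible, so this order is the only one that works.)
            byte[] compressed;
            using (var buffer = new MemoryStream())
            {
                using (var gzip = new GZipStream(buffer, CompressionMode.Compress))
                    gzip.Write(plaintext, 0, plaintext.Length);
                compressed = buffer.ToArray();
            }

            // 2. Encrypt the compressed bytes with AES.
            using (var aes = Aes.Create())
            {
                aes.Key = key; // e.g. 32 bytes for AES-256
                aes.IV = iv;   // 16 bytes
                using (var encryptor = aes.CreateEncryptor())
                    return encryptor.TransformFinalBlock(compressed, 0, compressed.Length);
            }
        }
    }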

I hope that one day we will see an encryption scheme which incorporates compression in its design. It might also incorporate other mechanisms to further increase the entropy of the input plain-text. Building a joint compression/encryption algorithm may also yield performance improvements over separate, coherent compression and encryption steps.

It all sounds promising, but this is not an undertaking which I am experienced enough in to tackle.


Revisiting DIDO Wireless


I’ve had some time to think about the DIDO wireless idea, and still think it has a very important part to play in the future – assuming the trial conducted with 10 user nodes is truthful. Before I explore the commercial benefits of this idea, I will first revisit the criticisms, as some have merit and will help scope a realistic business case.

Analysis

Weaknesses

  • One antenna per concurrent node – The trial used 10 antennas for 10 user nodes. Each antenna needs a fixed line or directional wireless backlink – this would imply poor scalability of infrastructure. [Update: This is likely so, but Artemis claim the placement of each antenna can be random – wherever is convenient]
  • Scalability of DIDO – We are told of scaling up to 100s of antennas in a given zone. I question the complexity of the calculations for spatially dependent coherence; I suspect the complexity is exponential rather than linear or logarithmic. [Update: the Artemis pCell website now claims it scales linearly]
  • Scalability of DIDO controller – Given the interdependence of signals, is the processing parallelisable? If not, this also limits the scale of deployment. [Update: Artemis claim it scales linearly]
  • Shannon’s Law not broken – The creators claim to break the Shannon’s-law barrier. This appears to be hyperbole: they are not increasing spectral efficiency, rather they are eliminating channel sharing. The performance claims are likely spot on, but invoking “Shannon’s Law” was likely purely to generate hype – which, in the end, is actually needed to get enough exposure for such a revolutionary concept.

Neutral

Discussion surrounding neutralised claims which may be reignited, but are not considered weaknesses or strengths at this point in time.

  • Backhaul – Even though the antennas appear to require dispersed positioning, I don’t believe the backhaul requirements to the central DIDO controller need be considered a problem. They could be fixed line or directional wireless (point to point). [Update: This is not really a problem. Fibre is really cheap to lay for backhaul in the end; it’s most expensive for the last mile. Many telcos have lots of dark fibre not being used, and Artemis is partnering with telcos rather than trying to compete with them]
  • DIDO Cloud Data Centre – I take this as marketing hyperbole. Realistically a DIDO system needs a local controller; all other layers above such a system are distractions from the raw technology in question. As such, the communication links between the local controller and antennas need not be IP (transport-layer) links, but would rather be link-layer or even physical-layer links.
  • Unlimited number of users – Appears to also be hyperbole; there is no technological explanation for such a sensational claim. We can hope, but cannot place this as a Pro until further information is provided. [Update: It does scale linearly, so this is a fair claim compared to current cell topology, or to a pCell that was limited by exponential processing load]
  • Moving User Nodes – Some may claim that a moving node would severely limit the performance of the system. However, this pessimistically assumes a central, serial, CPU-based system controls everything (a by-product of Rearden’s “Data Centre” claims). In reality I believe it’s possible for a sub-system to maintain a matrix of parameters for the main system to encode a given stream of data, and all systems may be optimised with ASIC implementations. Leaving this as a neutral but noteworthy point.
  • Size of Area of Coherence – Some may claim a problem with more than one person in an area of coherence, assumed to be around one half-wavelength across. How many people do you have 16cm away from you (900MHz)? Ever noticed high-density urbanisation in the country? (10-30MHz for ionosphere reflection – <15m half-wavelength) [Update: demonstrations have shown devices as close as 1cm away from each other – frequency may still be a limiting factor of course, but that is a good result]
  • DIDO is MIMO – No, it’s very similar but not the same, and is likely inspired by MIMO. Generally MIMO is employed to reduce error, noise and multipath fading; DIDO is used to eliminate channel sharing. Two very different effects. MIMO precoding creates higher signal power at a given node – this is not DIDO. MIMO spatial multiplexing requires multiple antennas on both the transmitter and receiver, sending a larger-bandwidth channel via several lower-bandwidth channels – DIDO nodes only need one antenna – this is not DIDO. MIMO diversity coding is what it sounds like: diversifying the same information over different antennas to overcome wireless communication issues – this is not DIDO. [Update: Artemis and the industry are now standardising on calling it a C-RAN technology]
  • 1000x Improvement – Would this require 1000 antennas? Is this an advantage, given the number of antennas required? MIMO is noted to choke with higher concurrency of users. Current MIMO systems with 4 antennas can provide up to 4x improvement – such as in HSPA+. Is MIMO limited to the order of 10s of antennas? Many, many questions… [Update: This is likely so, but Artemis claim the placement of each antenna can be random – wherever is convenient]

Strengths

  • Contention – Once a user is connected to a DIDO channel, there is no contention for the channel, and therefore latency and bandwidth improve.
  • Latency – Is a very important metric, perhaps as important as bandwidth, and often a barrier to many innovations. Remember that light propagates through optical fibre at only two-thirds the speed of light, whereas these signals travel through the air at full light-speed.
  • Coverage – It seems that DIDO will achieve better coverage and fewer black spots than what is achievable even with cellular femtocells. Using new whitespace spectrum, rural application of pCell would be very efficient, and if rebounding off the ionosphere is still feasible, it is the answer to high-speed, high-coverage rural internet.
  • Distance – DIDO didn’t enable ionosphere radio communications, but it does make ionosphere high-bandwidth data communication possible. Elimination of inter-cell interference and channel sharing makes this very workable.
  • Physical Privacy – The area of coherence represents the only physical place where the information intended for the user can be received and sent from. There would be potential attacks on this physical characteristic – by placing receivers adjacent to each DIDO antenna and mathematically coalescing their signals for a given position – but of course encryption can still be layered over the top.
  • Bandwidth – The most obvious, but perhaps not the most important.
  • [New] Backward Compatibility – Works with existing LTE hardware in phones; works better with a native pCell modem, particularly for latency. Seamless handoff to cell networks, so it can co-operate.
  • [New] Wireless Power – Akbars (see Update below) suggested this technique could be used for very effective wireless power, working over much larger distances than current technology. This is huge!

Novel Strength

This strength needed particular attention.

  • Upstream Contention Scheduling – The name of this point can change if I find or hear of a better one. (TODO…)

Real World Problems

Unworkable Internet-Boost Solutions

I remember reading of a breakthrough where MEMS directional wireless was being considered as an internet boost. One would have a traditional internet connection, and when downloading a large file or movie, the information would be sufficiently cached in a localised base station (to accommodate a slow backlink or source) and then forwarded to the user as quickly as possible. This burst would greatly improve download times, and a single super-speed directional system would be enough to service thousands of users given its extreme speed and consumers’ limited need for large transfers. Of course, even such a directional solution is limited to line of sight; perhaps it would need to be mounted on a stationary blimp above a city…

Mobile Call Drop-outs

How often do you find yourself calling someone back because your call dropped out? Perhaps it doesn’t happen to you often because you’re in a particularly good coverage area, but it does happen to many people all the time. The productivity loss and frustration is a real problem which needs a real solution.

Rural Service

It is very economical to provide high-speed communication to many customers in a small area; for rural customers the equations are reversed. Satellite communication is the technology of choice, but it is considerably more expensive, generally lower bandwidth, and subject to poor latency.

Real World Applications

The anticipated shortcomings of DIDO technology need not be considered deal-breakers. The technology still has the potential to address real-world problems. Primarily, we must not forget the importance and dominance of wireless communications.

Application 1: A system could be built such that there may be 10 areas of coherence (or more), used to boost current-technology internet connections. One could use a modest-speed 5Mbps ADSL2+ service to easily browse the bulk of internet media {text, pictures}, and still download a feature-length movie at gigabit speeds. This is a solution for the masses.

Application 2: DIDO allows one spectrum allocation to be shared without contention, but that allocation need not be large – it could mean a small (say 512Kbps) but super-low-latency connection per user. In a 10-antenna system, with 20MHz of spectrum and LTE-like efficiency, this could mean 6000 concurrent active areas of coherence. So it would enable very good quality mobile communication, with super-low latency and practically no black spots. It would also enable very effective video conferencing. All without cellular borders.

Applications 3 and 4: The same as Applications 1 and 2, but using a long-range ionosphere rural configuration.

Conclusions

We still don’t know too much about DIDO; the inventors have surrounded their idea with much marketing hype. People are entitled to be cautious – our history is littered with shams and hoaxes, and as it stands the technology appears to have real limitations. But this doesn’t exclude the technology from the possibility of improving communication in the real world. We just need to see Rearden focus on finding a real-world market for its technology.

UPDATE

  • [2017-01-10] Finally, the hint text has disappeared completely, to be replaced with:
    • “supports a different protocol to each device in the same spectrum concurrently” – following up on their last update
    • “support multiple current and future protocols at once.” – this is a great new insight. They have, right up top, that pCell supports 5G and future standards. So without even considering the increased capacity, customers don’t need to keep redeploying new hardware into the field.
    • “In the future the same pWave Minis will also support IoT” – there are standards floating around, and what better way to implement security for IoT, than physically isolated wireless coherence zones, and perhaps very simplistic modulation.
    • “precise 3D positioning” – This confirms one of my predictions: pCell can supercharge the coming autopilot revolution.
    • “and wireless power protocols” – as I always suspected. However, it still seems impractical. This is likely just a candy-bar/hype statement.
    • “Or in any band from 600 MHz to 6 GHz” – it’s interesting to learn this specification – the limits of typical operation of pCell. I note they have completely abandoned long-wave spectrum (for now at least).
    • “pWave radios can be deployed wherever cables can be deployed” – I still think fibre/coax is going to be necessary, wireless backhaul is unlikely to be scalable enough.
    • “Typically permit-free” – does this refer to the wireless signal, I wonder? Very interesting if so. It could also refer to carrier licensing, because you’re only carrying data; information is only deduced back at the data centre.
    • “can be daisy-chained into cables that look just like cable TV cables” (from Whitepaper) – so perhaps long segments of coax are permitted to a base-station, but that base-station would likely require fibre out.
    • “pCell technology is far less expensive to deploy or operate than conventional LTE technology” – they are pivoting away from their higher-capacity message, now trying to compete directly against Ericsson, Huawei, and others.
  • [2016-02-25] pCell will unlock ALL spectrum for mobile wireless. No more spectrum reservations – pCell could open up the FULL wireless spectrum for everyone! I hope you can grasp the potential there. Yesterday I read a new section on their website: “pCell isn’t just LTE”. Each pCell can use a different frequency and wireless protocol. This means you could have an emergency communication and internet both using 600MHz at the same time, metres away! In 10 years I can see the wireless reservations being removed, and we’ll have up to TERABITS per second of bandwidth available per person. I’m glad they thought of it; this is going to be the most amazing technology revolution of this decade, and will make fibre to the home redundant.
  • [2015-10-03] It’s interesting that you can’t find Hint 1 on the Artemis site, even when looking back in history (Google); in fact at the date of 2015-02-19 it reads “Feb 19, 2014 – {Hint 2: a pCell…”, which is strange given my last update date below. Anyway, the newest hint may reveal the surprise:
    • “Massless” – Goes anywhere with ease
    • “Mobile” – outside your home
    • “Self-Powered” – either Wireless Power (unlikely), or this pCell is like some sort of sci-fi vortex that persists without power from the user.
    • “Secure” – good for privacy conscious and/or business/government
    • “Supercomputing Instance” – I think this is the real clue, especially given Perlman’s history with a Cloud Gaming startup previously.
    • My best guesses at this stage in order of likelihood:
      • It’s pCell VR – already found in their documentation; they just haven’t updated their homepage. VR leverages the positioning information from the pCell VRI (virtual radio instance) to help a VR platform with both orientation and rendering.
      • Car Assist – Picks up on “Secure” and the positioning information specified for VR. VR is an application of pCell to a growing market; driverless is another growing market likely on their radar. Driverless cars have the most trouble navigating built-up, busy environments, particularly roundabouts. If pCell can help in any way, it’s by adding an extra absolute-position information source that cannot be jammed. Of course the car could also gain great internet connectivity, as well as tracking multiple vehicles centrally for more centralised coordination.
      • Broader thin-client computing, beyond “just communications” (although one can argue pCell is communications, an enabler). This would include business and gaming.
      • Emergency Response – even without a subscription, it would be feasible to track non-subscribers’ locations.
  • [2015-02-19] Read this article for some quality analysis of the technology – http://archive.is/ZTRhf [Archive Link] – Old broken link: http://akbars.net/how-steve-perlmans-revolutionary-wireless-technology-works-and-why-its-a-bigger-deal-than-anyone-realizes.html
  • [2015-02-19] Artemis have on their website: “Stay tuned. We’ve only scratched the surface of a new era.…{Hint: pCell technology isn’t limited to just communications}” – I’m gunning that this will be the Wireless Power which Akbars suggested in his blog article. [Update 2015-10-03: which could be great for electric cars, although efficiency would still be quite low]
  • [2016-06-02] Technical video from CTO of Artemis – https://www.youtube.com/watch?v=2ETMzxkyTv8
    • Better coverage – higher density of access points = less weak or blackspots
    • When there are more antenna than active users, quality may be enhanced
    • Typical internet usage is conducive to minimising the number of antennas for an area
    • pCell is not Massive MIMO
    • pCell is Multi User Spatial Processing – perhaps MU-MIMO [see Caire’03, Viswanath’03, Yu’04]
    • According to mathematical modelling, densely packed MIMO antennas cause a large radius of coherent volume, while distributed antennas minimise that radius. Which is intuitive.
    • See 4:56 for a 3D visualisation of 10 coherent volumes (spatial channels) with 16 antennas. Antennas are 50m away from users – quite realistic. Targeting 5dB SINR.
    • The pCell Data Centre does most of the work – fibre is pictured arriving at all pCell distribution sites.
    • 1mW transmit power for pCell, compared to 100mW for WiFi (at 25:20).

What the… Payroll tax?

I didn’t really notice the GST debate, except being annoyed at all prices increasing when the GST was introduced (I was in high school). It turns out that a major reason for its introduction was to eliminate many state taxes – one of these taxes being Payroll tax…

Have a look: http://www.sro.vic.gov.au/sro/SROnav.nsf/LinkView/8AFF7B9FB4EB3733CA2575D20022223D5DB4C6346AF77ABBCA2575D10080B1F7

It turns out that if I employ too many people, I will have to pay the state a 4.9% tax on all the gross wages paid to my employees – including superannuation! Not only is this a disincentive to employ, it’s also yet another administrative burden which limits growth. I hear it all the time: ultimate success requires flexibility and scalability – Payroll tax is an ugly and unnecessary burden.

Sure, we can’t just pull such revenue out from under the states, but it can be replaced with revenue from another, more efficient tax – such as the GST. At just 10%, our GST is relatively low compared to other countries; in Europe some countries have a GST or VAT of 25%.

So why not simply increase the GST? Consumers, AKA voters, are the end-users and effectively the ones who pay the tax. Even though consumers could ultimately pay less in the long run once companies no longer need to pay payroll tax, the whole economy changes. Smaller businesses that didn’t previously pay Payroll tax would effectively be charging their customers more, because they have no regained revenue from a dropped tax to discount against. Small changes to the rate over a long time, matched with reductions in payroll tax in the states, may work best. But in summary, GST rate increases are political poison for non-business-owning voters.

Another issue is fraud. As GST increases, the returns on VAT fraud become greater. Countries such as Sweden (25%) and the UK (20%) are subjected to simple but hurtful frauds which effectively steal from GST revenue. It basically works by having a fake company be liable to pay the GST, and a legitimate company entitled to the refund; the fake company goes bankrupt. As the GST rate increases, the payback from such frauds increases, encouraging more incidents. It seems that any macro-economic change, either short-term (government stimulus) or long-term (tax reform), opens the door for corruption and rorting. If the GST rate is to be increased, the right legislation needs to be in place to prevent such fraud.

So in the end, the ultimate way for a business to overcome Payroll tax is to innovate: good products which provide a comfortable return, and internal efficiency improvements which reduce the need to hire as many staff, maintaining a competitive edge.

DIDO – Communication history unfolding?

First of all I just want to say – THIS MAY BE HUGE!!

I read this article last night: http://www.gizmodo.com.au/2011/08/dido-tech-from-quicktime-creator-could-revolutionise-wireless-broadband/

In plain English: a company has discovered a way to dramatically improve mobile internet. It will be 5-10 years before it’s commercialised; however, I believe it will happen sooner, with many realising just how revolutionary it is, investing more money and attracting more resources to get it done sooner.

I am not a representative of the company, but I have been involved in understanding and pondering wireless technology, even coming up with faster and more efficient wireless communication concepts – though none as ground-breaking as this one. I don’t claim to know all the details for certain, but having read the whitepaper I believe I can quite accurately assume many details and future considerations. Anyway, I feel it’s important for me to help everyone understand it.

How does it work (Analogy)?

Imagine walking down the street: everything is making noise – cars, people, the wind. It’s noisy, and someone in the distance is trying to whisper to you. Suddenly all the noise disappears, and all you can hear is that person – clearly. This is because someone has adjusted all the sounds around you to cancel out, leaving just that person’s voice.

How does it work (Plain English)?

When made available in 5-10 years:

  • Rural users will have speeds as fast as in CBDs, receiving signals from antennas as far as 400km away!
  • In cities there will need to be several antennas within ~60km of you
    • today many antennas are required for mobile internet; that number will be reduced…
    • the number of antennas per mobile phone tower or building will be reduced to just one.
  • there will be a central “server” performing the mathematical calculations necessary for the system.

The most technical part (let’s break it down):

  1. Unintended interference is bad (just to clarify and contrast)…
  2. DIDO uses interference, but in a purposeful way
  3. DIDO uses multiple antennas, so that at a particular place (say your house) the signals interfere with each other in a controlled way, leaving a single channel intended for you.
  4. It’s similar to how this microphone can pick up a single voice in a noisy room – http://www.wired.com/gadgetlab/2010/10/super-microphone-picks-out-single-voice-in-a-crowded-stadium/
    but a little different…

How does it work (Technical)?

I have been interested in two related concepts recently:

  1. Isolating a single sound in a noisy environment – http://www.wired.com/gadgetlab/2010/10/super-microphone-picks-out-single-voice-in-a-crowded-stadium/
  2. I saw an interview with an ex-Australian spy who worked at a top-secret facility in Australia in co-operation with the US. He was releasing a book revealing what he could. From this facility he spied on radio communications around the world. I wondered how, and then figured they likely employed the “super microphone” method.

When I heard about this technology last night, I didn’t have time to look at the whitepaper, but assumed the receivers might use “super microphone”-style technology. It turns out the inverse (not opposite) is true.

Scenario:

User A’s radio is surrounded by radios from DIDO. The DIDO server calculates what signals need to be generated from the various radios such that, when converging on User A, they “interfere” as predicted to leave the required signal. When there are multiple users, the mathematical equations take care of working out how to converge the signals. As a result, the wireless signal in the “area of coherence” for the user is as if the user has the full spectrum 1:1 to an external wireless base station. A toy numerical sketch of this follows.
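
To make that concrete, here is a toy numerical sketch of the kind of maths involved – my own interpretation, using idealised zero-forcing precoding over a 2-antenna, 2-user channel, not Rearden’s actual algorithm:

    using System;

    class DidoToy
    {
        static void Main()
        {
            // H[i,j] = channel gain from DIDO antenna j to user i,
            // assumed perfectly known to the DIDO server.
            double[,] H = { { 1.0, 0.4 }, { 0.3, 1.0 } };
            double[] s = { 1.0, -1.0 };   // symbols intended for user 0 and user 1

            // The server transmits x = H^-1 * s (2x2 inverse written out by hand).
            double det = H[0, 0] * H[1, 1] - H[0, 1] * H[1, 0];
            double[] x =
            {
                ( H[1, 1] * s[0] - H[0, 1] * s[1]) / det,   // antenna 0 transmits
                (-H[1, 0] * s[0] + H[0, 0] * s[1]) / det    // antenna 1 transmits
            };

            // Over the air, each user receives y_i = sum_j H[i,j] * x[j].
            // The cross-terms cancel: each user hears only their own symbol.
            for (int i = 0; i < 2; i++)
                Console.WriteLine($"user {i} receives {H[i, 0] * x[0] + H[i, 1] * x[1]:F3}");
        }
    }

With more users the hand-written inverse generalises to inverting a larger channel matrix, which hints at why the processing load grows with the number of users.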

Implications for domestic backhaul

There would need to be fibre links to each of the antennas deployed, but beyond that, remaining backhaul and dark fibre would rapidly become obsolete. DIDO can reach 400km in the rural mode, bouncing off the ionosphere and still maintaining better latency than LTE, at 2-3ms.

Physical Security?

We hear about quantum communication and the impossibility of deciphering its messages. I believe a similar concept of physical security can be achieved with DIDO. Effectively, DIDO provisions areas of coherency: areas in 3D space where the signals converge, cancelling out signal information intended for other people. So you only physically receive a single signal on the common spectrum; you can’t physically see anyone else’s data unless you are physically in their target area of coherency. This does not, however, guarantee privacy. By deploying a custom system of additional receivers sitting outside the perimeter of your own area of coherency, and armed with the exact locations of the DIDO system antennas, one would theoretically be able to single out the individual raw signals from each antenna and their times of origin, then calculate the converged signal at alternative areas of coherence. This is by no means a unique security threat, and of course one could simply employ encryption over the channel for secrecy.

This doesn’t break Shannon’s law?

As stated in their white paper, people incorrectly apply the law to spectrum rather than channel. Even before DIDO, one could use directional wireless from a central base station and achieve 1:1 channel contention (though that’s difficult to achieve practically). DIDO creates “areas of coherency” where all the receiving antenna picks up is a signal intended only for it.

Better than Australia’s NBN plan

I’ve already seen some people attempt to discredit this idea, and I believe they are both ignorant and too proud to give up their beloved NBN. I have maintained the whole time that wireless technology will exceed the NBN believers’ interpretation of Shannon’s law. Remember, Shannon’s law is about *channel*, not *spectrum*. DIDO is truly the superior option: gigabit speeds with no digging! And a clear warning that governments should never be trusted with making technology decisions. Because DIDO doesn’t have to deal with channel access, the circuitry for the radios is immensely simplified. The bottleneck will likely be the ADCs and DACs, of which NEC has 12-bit 3.2-gigasample devices (http://www.physorg.com/news193941421.html). So multi-terabit and beyond is no major problem as we wait for the electronic components to catch up to the potential of wireless!

CRITICISMS UPDATE:

  • One aspect to beware of is the potential need for a 1:1 correlation between base-station antennas and users. I can’t find any literature yet which either confirms or denies such a fixed correlation, but the tests for DIDO used 10 users and 10 antennas.
  • If there must be one antenna per user, this idea isn’t as earth-shattering as I would hope. However, there would still be relevance: 1) it still achieves 100% spectrum reuse, 2) all while avoiding the pitfalls of centralised directional beam-forming systems, where obstacles are an issue, 3) not to mention the ability to leverage the ionosphere for rural applications – very enabling.
  • After reading the patent (2007) – I see no mention of the relationship between AP antennas and the number of users. However, I did see that there is a practical limit of ~1000 antennas per AP. It should be noted that if this system does require one antenna per user, it would still be very useful as a boost system. That is, everyone has an LTE 4G link, and then when downloading a video, the bulkier data is streamed very quickly via DIDO (the number of concurrent DIDO connections being limited by the number of AP antennas).
  • The basis for “interference nulling” was discussed in 2003 by Agustin et al.: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.10.2535
  • Removed many ! at the top, to symbolise the potential for disappointment.
  • Hey there’s a spell check!
  • Have a look here for the Whirlpool discussion: http://forums.whirlpool.net.au/forum-replies.cfm?t=1747566

Memorable IPv6 Addresses

Back in Nov 2009, I foresaw that IPv6 addresses would become a menace to memorise, so I had a crack at improving their memorability – see http://blog.jgc.org/2011/07/pronounceable-ipv6-addresses-wpa2-psk.html?m=1. The basic idea is that sounds which make up words, or resemble words, are much easier to remember than individual digits. I was actually thinking about this idea last night – how it could be applied to remembering strong passwords.

This morning I got an email from a colleague who pointed out this: http://www.halfbakery.com/idea/IPv6_20Worded_20Addresses#1260513928. I don’t believe the scheme used there is as memorable as mine, but it sounds like other people are having similar ideas.

Back to my thoughts last night on more memorable passwords. We know we’re supposed to use upper and lower case, special symbols etc. But even then you’re not using the full 64-bit capacity of the recommended 8-character string. To use my scheme to memorise more secure passwords, you would simply use my tool.
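To illustrate the idea in code (a toy reconstruction, not my original tool – the syllable alphabets and the dash chunking are arbitrary choices of this sketch): pack random bits into consonant–vowel syllables, which read aloud far more easily than hex digits.

using System;
using System.Security.Cryptography;
using System.Text;

class PronounceableKey
{
    // 16 consonants (4 bits) and 4 vowels (2 bits): each CV syllable carries 6 bits.
    static readonly char[] Consonants = "bdfgklmnprstvzjh".ToCharArray();
    static readonly char[] Vowels = "aeou".ToCharArray();

    // Render a 64-bit value as eleven CV syllables (the last 6-bit group is zero-padded).
    static string Encode(ulong value)
    {
        var sb = new StringBuilder();
        for (int i = 0; i < 11; i++)
        {
            int group = (int)((value >> (i * 6)) & 0x3F);
            sb.Append(Consonants[group >> 2]); // top 4 bits of the group
            sb.Append(Vowels[group & 0x3]);    // bottom 2 bits of the group
            if (i % 4 == 3) sb.Append('-');    // chunking helps memorisation
        }
        return sb.ToString();
    }

    static void Main()
    {
        var bytes = new byte[8];
        RandomNumberGenerator.Fill(bytes);          // cryptographically strong bits
        ulong key = BitConverter.ToUInt64(bytes, 0);
        Console.WriteLine(Encode(key));             // prints something like “bakozumo-gepadevo-…”
    }
}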

I made a video 🙂

[youtube=http://www.youtube.com/watch?v=f60GGxPskG4]

Phishing Drill – Find your gullible users

Do you remember participating in fire drills in school? I remember them fondly – less school work for the day. I also remember earthquake drills when I went to school in Vancouver for a year. So what do drills do? They educate us about the signs and signals to look out for, and then how to react. I believe spam filters work fairly well (that was a sudden change of subject). I use Gmail, where spam detection is built in, yet I still receive the occasional spam message. Education of those who fall for spam and phishing is an important factor in reducing the associated problems and scams. If all internet users had their wits about them, we could put spammers and phishers out of business – and most door-to-door salesmen. So how do we achieve this without million-dollar advertising campaigns?…. Drills. Spam/Phishing Drills, or to be more generic, perhaps Internet Gullibility Drills (IGD – everyone loves an initialism).

How do you drill the whole of the Internet? “Attention Internet, we will be running a drill at 13:00 UTC”…. probably definitely not. My proposed method involves every web application which liaises with its customers by email, or is at risk of being spoofed in a phishing scam, running its own private drills. Such a drill would involve sending out an email message which resembles a real-life phishing/spam email. Each time, different variables could be used – email structure, sender address, recipient’s name, a direct link to a spoof site. In any case the drill should be able to detect those who fall for it. They can then be notified of their stupidity in the matter – delicately, though; “Haha – you just fell for our IGD, you loser!” is way off.

Ultimately a Gullibility Prevention Centre website would exist which users could be referred to, so they may refresh themselves on current threats, how to identify them and how to react. Quite a simple solution, and maybe I’m not the first one to think of it – I didn’t bother searching the Internet for a similar idea…
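A minimal sketch of the mail-out step (the domain, wording and token scheme here are all hypothetical): each drill email embeds a per-user token, so a click on the spoof link identifies exactly who needs the refresher.

using System;

class PhishingDrill
{
    record User(string Name, string Email);

    // Builds one drill email. The per-user token is the key part: the server
    // stores token -> user before sending, so a hit on the link can be matched
    // back to the person who fell for the drill.
    static string BuildDrillEmail(User user, Guid token) =>
        $"To: {user.Email}\n" +
        "Subject: Urgent: verify your account\n\n" +
        $"Dear {user.Name},\n" +
        "Your account will be suspended unless you confirm your details at:\n" +
        $"http://drill.example.com/verify?t={token:N}\n"; // link lands on the education page

    static void Main()
    {
        var user = new User("Alice", "alice@example.com");
        Guid token = Guid.NewGuid(); // persist token -> user server-side before sending
        Console.WriteLine(BuildDrillEmail(user, token));
    }
}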

 

Creativity. Just Pulleys and Levers.

Growing up as a kid, I was captivated by magic tricks and wanted to know how they were done. Pulling a rabbit out of a hat, the sleight of hand, the magnets, the hidden cavity. They would have you believe that they were achieving something beyond the physical laws, that they had a supernatural power. TV shows and literature thrive on unveiling the surprisingly simple processes behind even the most elaborate illusions.

Creativity is the last remaining magic trick.

Western culture goes to great lengths to idolise and mystify it. “It’s a gift”, “It’s a talent”, “They must use the right side of their brain”. Paintings and artworks are highly prized, some running into the millions of dollars. The creative process in the mind seems elusive and magic. Society seems to think that creativity is only for a select few. The fanfare and mystique of creativity adds to the performance.

They’re wrong.

Creativity is a simple process of random noise and judgement, two very tangible, logical concepts. It’s a process, like a magician’s rabbit in a hat. This doesn’t take away from the impact of the product of creativity, but it does dispel the superhuman status of the skill.

Small Things

Creativity doesn’t just happen once in an artwork; it happens multiple times at different levels, in small amounts, but always with the same components of random noise and judgement.

A painter may start with a blank canvas and no idea of what they will paint. They then recall memories, images and emotions, which all feed in as both random noise and experience for judgement. They then choose a scene: the first round of creativity has occurred.

The painter will not recall every detail of the scene perfectly, but will have to choose how the scene is composed. In their mind they imagine the horizon, the trees, perhaps a rock, or a stream, each time picturing different locations and shapes and judging their aesthetic suitability. Another round of creativity has occurred, with many more elements of creation. Once painting with a brush in hand, a painter may think ahead to the texture of the rock, the direction of the stream, the type of tree, the angle and number of branches, the number of leaves, and the colours.

Editing

They may stand back, look at what they have painted, and decide to change an element. In this case, their single painting is one possibility of randomisation, and they have judged it to be substandard. They then picture other random forms and corrections and judge the most appropriate course of action.

That whole process is the sum of smaller decisions, with good judgement and a flow of random ideas.

Small things everywhere

This is transferable to music composition. Instead of visualising, like the painter, composers play different melodies in their minds. Many musicians fluke a new melody: they make a mistake on their instrument, or purposefully allow themselves to play random notes. With judgement, they select the appropriate phrases.

It also works for the lyrics of a song. Lyricists have a sea of words and language moving through their minds, and often randomise. How many words go through your head when you’re trying to find a word that rhymes? With good judgement and some planning, the final set of lyrics can inspire. But there are plenty of draft pieces of paper in the bin.

The end products of creativity can be very impressive, but an artist won’t discount their work as being merely time and small things. There is one exception though: Vincent van Gogh famously said, “Great things are done by a series of small things brought together”.

Design vs Performance

At this point, it’s very important to comprehend two components of art: design and performance. Once a painting has been designed, it’s easy to reproduce – or perform. Now, the painter may have refined their design through performance, but they are left with a blueprint at the end for reproduction. Music is constructed in the same way, and is easily reproduced by many musicians. Lyrics can be recited, or sung to music, by a performer.

So what part of art is actually creative? Often the performance is an almost robotic function. Jazz combines performance and design at the same time: it’s the design, the improvisation, that supplies the creative credential. Design is the crucial creative element. A painter executing the correct strokes on a canvas is simply giving a well-practised performance.

Random is inspiration

Randomisation can be, and most often is, external: anything we receive at a low level through our five senses, and at a higher level through those senses, such as emotion. An executive is often presented with several options, and uses judgement to select the most appropriate. They are not producing a painting or a song, yet their process is still creativity – to society, a rather boring form of creativity. Software development is considered a very logical process, yet the end product is legally considered copyrighted literature. How could something so logical be attributed a magic-like status? This always conflicted in my mind, but understanding creativity as noise and judgement in design and performance cycles helped rationalise creativity back to the mortal domain, and consequently allowed me to understand why software design is art.

Conclusion

I expect any artist who reads this article to be beside themselves – “software isn’t art!”. But it’s the same as uncovering the secret of a magician’s trick. Artists are rightly protecting their trade secret, which doesn’t bother me. I like the occasional magic show.

PS.

An expanded creativity formula:

R = Randomisation
J = Judgement
C = Creativity

C = J(R) – “Creativity is a function of Judgement of Randomisation”, as described above.

A breakdown of the formula’s components – and further insight into my perceptions of the lower-level concepts (more for myself, to map it out)

E = Experience
K = Knowledge
A = Article to be judged – perceptions through senses and feelings

J = F(A,E,K) – Judgement is a function of an Article to be judged, Experience and Knowledge

M = Memory
SFJ = Senses and Feelings and Past Judgement

E = M(SFJ) – Experience is a class of Memory, that of senses, feelings and past judgement
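As a sketch of C = J(R) in code (the melody domain and the leap-penalising judge are placeholders of mine, chosen only to make the loop concrete): generate random material, apply judgement, keep the best.

using System;
using System.Linq;

class Creativity
{
    // C = J(R): generate random noise (R), apply judgement (J), select the best.
    static readonly Random Noise = new Random();

    static int[] RandomMelody() =>
        Enumerable.Range(0, 8).Select(_ => Noise.Next(60, 73)).ToArray(); // 8 MIDI notes, C4..C5

    static double Judge(int[] melody) =>
        -melody.Zip(melody.Skip(1), (a, b) => Math.Abs(a - b)).Sum(); // prefer small steps over leaps

    static void Main()
    {
        int[] best = Enumerable.Range(0, 1000)        // 1000 rounds of random noise
                               .Select(_ => RandomMelody())
                               .OrderByDescending(Judge) // judgement ranks them
                               .First();                 // keep the winner
        Console.WriteLine(string.Join(" ", best));
    }
}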

 

Can we centralise auto-cooling/heating?

Imagine in 10 years’ time: you stop your car (electric), and a robot system swaps your depleted battery for a fresh one and also swaps in an LN2 (liquid nitrogen) canister. So why would we want a cold canister of LN2? Why not just turn on the old A/C? This idea comes as an opportunity arising from several likely changes: 1) the emergence of electric cars, 2) the resultant reduction in parts, 3) the need to conserve energy, and 4) regular stops at a fuel depot.

Even today, we all stop at a servo to pick up fuel, and could potentially implement such a system. An A/C compressor in a car cannot be considered as efficient as a domestic reverse-cycle A/C unit or an industrial heat pump. So if there were a system to distribute “COLD” and “HOT” rather than making it on the drive, we would save resources and money. But with today’s system, we already have an excess of heat from the combustion engine.

It’s undeniable that we’ll be moving away from oil-based fuels toward a more efficient electric system. We’re running out of oil, and electricity is so much easier to transmit. Let’s not start on the reasons why hydrogen is a bad idea – let’s just have one energy transmission system: electricity.

As I’ve said in the past, when you convert to an electric car, you don’t need a radiator, clutch, gearbox, gaskets, timing chains, conrods, transmission fluid, engine oil, large brakes, flywheel, sump, pollution gear, filters, heavy engine block, fuel pump, spark plugs, fuel injectors, valves… just batteries, a regulator and two or more electric motors. That is, of course, until you want to stay cool in summer – in an electric car you need a separate motor to run the A/C compressor. Or warm in winter – you now need an electric radiator. Both are extra, expensive components which drain precious battery power.

My ideal vision for an electric car is not one that uses dorky inductive paddles to recharge your batteries – that’s so 1900s. The best idea I have seen for this is the battery swap. You can be in and out like an F1 pitstop! I guess fuel stores want you to consider their amazing multi-buy deals – but they’ll have to find a way for you to order from the car 🙂 Of course, you can still recharge at home, but home charging will never be scalable. As batteries get more efficient, people will store more energy and therefore have faster cars, and they’re going to need to recharge faster – something a household single-phase supply is going to struggle with in the future. Wiring up service depots with more power (and maybe a token solar panel or two) is more likely.

Such a power depot would also be able to operate a small LN2-producing heat pump. It only needs to be run (in Geelong, anyway) during summer. As I said at the start, when the futuristic robot changes your batteries, for about $2 it also changes an LN2 canister – or, for heating, a molten-salt container? Maybe the hot side of the heat pump can be used to warm up pies 🙂

You would have a simple cat-sized (I panicked) radiator which would take the LN2 (or heat source) and apply the desired amount of thermal transfer. It would actually cool better than an A/C too. By the way, storing “COLD” in LN2 takes up much less room and weight than storing the equivalent energy in a battery to run a small A/C unit in the car. Also, you won’t need to re-gas your A/C every however-many years, saving you thousands. Where I live, it only gets too hot in summer – so why pay thousands for an A/C unit in your car that you’ll only use a few times a year?

Let me know what you think.

http://www.costhelper.com/cost/cars/car-air-conditioning.html

Redundant Trucks – Vital Rail

It’s amazing. Just a few days ago I was driving home, thinking that they never have adverts for freight trains – and then I saw one, QT Transport or something. What a coincidence! Or maybe Unilever just bought them out. Of course there is a reason I noticed the absence of the ads. As you can probably tell from the rest of my blog, my mind often drifts into various topics, and transport is one I have been contemplating. With the dangers of road trains, the efficiencies of rail, increasing road congestion and talk surrounding the environment (let’s focus on the pollution for a change, and not just carbon), it’s a wonder there is no tactical push to consolidate onto rail.

Of course we can’t have rail going down every street – that’s where trucks come in. However, Australia has an extensive rail network which is perfect for intercity and interstate transport (I acknowledge that sea freight is also well suited to interstate transport, especially for larger items). So why do we use trucks in such situations? Why are truck drivers sent on long, continuous hours behind the wheel, away from their families, when there’s an alternative? These are all questions which I’m sure have a wide spectrum of answers, ranging from monopolies in truck logistics and perceived convenience for customers, to government regulation and leadership in these matters. Knowing the answers to these questions will help us fix the problem.

Let’s consider one reason: the familiarity of the truck medium. You see trucks everywhere, you share the road with them, and you understand how they operate – they’re like your car, but bigger. They’re always clean, new-looking, full of colour, and they advertise their company. On the other hand, you never really see trains – because rail and roads are rarely laid side by side. The ones you do see are old, and it’s difficult for the general public and smaller businesses to appreciate how they operate. People think of the constraints of a train’s timetable, unfamiliarity with loading, and of course trains don’t come to your door (well, the non-magic ones, that is). So there is a general inaccessibility of trains, especially to SMEs, and this is something which could be addressed by integrating rail companies with truck companies – truck deliveries to train loading points.

I’ll conclude. Clearly there also need to be financial benefits for the customer. Surely rail is the more economical option, but with the more familiar truck industry taking the bulk of the business away, economies of scale cannot be achieved. In this regard, it would require Government incentives and support to move a State/Nation toward rail and then reap the savings.

Pass the Red Tape

I’m running a business and employ a few people. I use spreadsheets to manage the finances, and came to the conclusion that I needed to buy accounting software of some sort if I am to have any chance of expanding. I bought a licence to QuickBooks Online, which is $250 a year, and although it looks like it would be really good software once it’s configured, getting there has been an uphill battle. My issue isn’t with the software though; it’s the poorly communicated legislation surrounding employment that gets me, and the lack of business-level standards. I don’t want to under-allocate or over-allocate the amount of money I need to keep aside for employee entitlements. Under-allocation means I could be in for a nasty surprise; over-allocation means I was unnecessarily saving funds which could have been used for additional cash flow and growth.

I’ve looked on both the Victorian Business and FairWork websites, which explain the various rules surrounding Annual Leave, Personal Leave {which contains a couple of subsets of leave}, Public Holidays, Jury Duty and Super. For starters, I couldn’t find a mention on any of the websites about Super. I only heard that it was 9%, and I think a UK tax website mentioned that the Australian Super rate was 9% (something obscure like that). So what’s with that?

All of the leave descriptions are quite easy to understand from an HR point of view. But when it comes to accounting, you have to deal with a mix of variables in weeks and days. For example, an employee is entitled to a minimum of 4 weeks of annual leave over 12 months worked. What does a week mean? Does that mean a working week – like 35 hours for a person who works 7 hours a day, 5 days a week? Or do they mean 7 days? I have assumed the 35 hours. With an unspoken rule like that, we can now convert “weeks” into a common measurement: “hours”.

In QuickBooks you can configure Annual Leave to credit an employee a fraction of an hour for every hour they work. Now if you use this method, how do you determine this fraction, when ordinary hours may be fewer in one year because the employee has taken Annual Leave as time off? This is just one of the underlying complexities of this system. So what fraction should go in there? I’ve seen a few different figures in forums, and those posts are from 2004. If QuickBooks is sold as an Australian product, shouldn’t the vendors post a yearly standard for such figures on their website? Well yes, they should, but they shouldn’t be the ones digesting government information into concrete standards – the government should do that.
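To make the ambiguity concrete, here is a small sketch (illustration only, not accounting advice – the correct reading has to come from FairWork or an accountant) of the two ways “4 weeks per year” can be turned into an hourly accrual fraction:

using System;

class LeaveAccrual
{
    static void Main()
    {
        const double weeksLeave = 4.0;
        const double weeksPerYear = 52.0;
        const double hoursPerWeek = 35.0; // the 7h x 5 days assumption from above

        // (a) leave accrues on every paid hour, including leave hours themselves
        double inclusive = weeksLeave / weeksPerYear;                // ~0.0769

        // (b) leave accrues only on hours actually worked
        double exclusive = weeksLeave / (weeksPerYear - weeksLeave); // ~0.0833

        Console.WriteLine($"Inclusive reading: {inclusive:F4} h per hour");
        Console.WriteLine($"Exclusive reading: {exclusive:F4} h per hour");

        // Sanity check: a full year at reading (a) credits exactly 4 x 35h weeks.
        Console.WriteLine($"{inclusive * hoursPerWeek * weeksPerYear:F1} hours");
    }
}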

One final point worth exploring is the amount of money a business should set aside for an employee’s entitlements. Leave is measured in hours and is paid at the rate of pay current at the time it’s claimed. So what rate of interest should be placed on an employee’s “leave account”? What is the relationship to a pay-rise percentage? What is the probability of an employee taking bereavement leave? How much should be set aside per employee? How much can this be discounted as more employees are hired (due to progression toward an internal, insurance-like system)?

I believe that these Government websites should go further in clarifying issues such as this. For starters,

  • the Annual Leave entitlement should simply be defined in relation to an hour worked, not a year worked and then pro-rated. For every hour of hard work an employee puts in, they should be entitled to a percentage of an hour in time off – simple as that!

Failing that, they should include simple dot-point rules for an accounting system. It would look something like this:

  • An employee must be awarded (regardless of whether they are full- or part-time):
    • X% of an hour for every hour of work they perform for annual leave
      • This account can be in debt by up to 2 weeks, before unpaid leave can be forced
    • X% of an hour for every hour of work they perform for personal leave
      • This account can be in debt by up to 5 days, before unpaid leave can be forced
    • X% of an hour for every hour of work they perform for public holidays
      • An employee must be able to take every public holiday paid, regardless of account balance
  • The amount of money (M) set aside for leave should be equal to or greater than:
    • All of Leave = L
    • Current Wage Rate = W
    • Estimated Wage Increase Rate = I (by default X%)
    • M = L * W
    • On each anniversary of employment: M = M + (M * I)
  • OR {Alternative accounting strategies….}
  • All of the different X% come from: {Flow diagrams and formulas}

Such a well-structured specification could be used across Australia and deliver much better efficiency in the economy. Not to mention the savings from stopping every vendor and business person from having to “re-interpret” the HR rules over and over again – and getting different results.

Stalked by Unilever

Is it just me? Am I the only one that can see them?

It started about 5 years ago, when I noticed the logo on a Lynx deodorant. Only at that stage it was more like “Ah! So Unilever make deodorant… huh! Isn’t that something”.

Fast forward to today. You can’t get through an ad break without a creepy Unilever logo slowly sliding out of the top of the screen. They don’t put it full screen; no, they just slip it in there, and when I scream, pointing at the screen, it ghosts away. I find myself looking over my shoulder. My doors are always locked.

What a strange marketing strategy – what are they expecting me to think? “Oh wow, look, that car company is owned by Unilever, that’ll be a good product; hey look, so is that airline, that corner store and that country’s government”. It’s creepy and I feel like I’m being stalked.

UPDATE: http://www.engadget.com/2010/08/02/brazilian-laundry-soap-comes-with-a-gps-surprise/

IPTV – How to conquer the living room

It’s embarrassing watching the video entertainment products coming out at the moment. They’re all trying to come up with the winning combination, and no one is succeeding – even Apple failed with their Apple TV product. The problem is that they’re trying to invent some expensive lounge-room Swiss Army knife, when what customers need is simplicity. They are failing to see the primary barrier – no one has IP-enabled TVs.

Here’s my formula to conquer the living room:

  1. All new TVs should be IPTV-enabled with a gigabit Ethernet port – this may include an on-screen display to surf the web etc., but basically it should support “Push IPTV”
  2. IPTV Adaptor – Develop a low-cost IPTV-to-TV device which simply supports “Push IPTV”. Eg. converts packets into an HDMI signal.
    • I want a company to develop an ASIC
    • It accepts and converts streamed video content (of the popular formats)
    • The chip supports output to HDMI, Component, S-Video or Composite
    • The chip is implemented in 4 different products: IP-HDMI, IP-Component, IP-S-Video, IP-Composite

With that barrier cleared, you don’t need to fork out for another gadget for your living room; you simply leverage your PC or laptop, pushing streaming video to any display in your home. When you connect your IPTV Adaptor to the network, it announces itself, and all media devices and media software can then push streaming video to that display.
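The announce step might look something like this minimal sketch (the port number and message format are invented for illustration – a real product would use a discovery protocol such as SSDP or mDNS):

using System;
using System.Net;
using System.Net.Sockets;
using System.Text;

class AdaptorAnnounce
{
    // The adaptor broadcasts its name and streaming port so media software on
    // the LAN can offer it as a push target.
    static void Main()
    {
        using var udp = new UdpClient { EnableBroadcast = true };
        byte[] msg = Encoding.UTF8.GetBytes("IPTV-ADAPTOR name=LoungeRoomTV rtp-port=5004");
        udp.Send(msg, msg.Length, new IPEndPoint(IPAddress.Broadcast, 43210));
        Console.WriteLine("Announced LoungeRoomTV to the local network");
    }
}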

So now you can use your Laptop / iPad as a remote. You drag your show onto your lounge room and away you go! While everyone is watching on the TV, you can see thumbnail previews of other IPTV shows currently showing – so your channel surfing doesn’t annoy everyone else 🙂

The Web Security Emergency

We responsible users of the internet have always been wary when surfing the Web. We know that we need to make sure websites use TLS security; we need to see HTTPS and a tick next to the certificate to ensure no one is eavesdropping on the information being transmitted.

How wrong we are.

The security industry has long known the weakness of RSA and ECC – the major cryptography used on the internet – as well as other asymmetric cryptography algorithms, against a quantum computer. And they have done little to prepare for the advent of the first quantum computer, because it has always been a futuristic dream. But this position is quickly becoming antiquated: there have been many developments in the last few years which now have scientists projecting the first quantum computer to arrive within 5 years. 5 years isn’t that far away when you consider that your sensitive data could be being recorded by anyone today, or even in the past, with the hope of decrypting it in 5 years!

There are people who think that Quantum computers will never come, but they are just burying their heads in the sand. Researchers have already developed one which implements Shor’s algorithm – the one which breaks RSA and ECC – on a chip!

So what is the security industry doing about it now? The threat won’t arrive in 5 years; the internet is insecure today. People are carrying out bank transactions today believing that the data being transmitted will never be read by an unauthorised third party. Programs and drivers are signed with algorithms which will be broken in 5 years – what will stop malware then? There are also anonymity systems such as Tor and I2P which likely use RSA as the basis for their security; in 5 years, how many citizens of politically oppressed countries will get the death penalty?

Fortunately there are asymmetric cryptography algorithms which are not known to be breakable by quantum computers, but these have not been standardised or fully researched yet – see http://spectrum.ieee.org/computing/software/cryptographers-take-on-quantum-computers. So what it comes down to is that the security industry doesn’t have the answer, and that’s the reason they are not telling anyone of the problem – they’re effectively covering up the truth.

UPDATE:

I’ve seen a lot of rapid developments recently; I’m still optimistic about my prediction of an RSA-breaking quantum computer within 5 years (from June 3, 2010).

http://arstechnica.com/science/news/2012/04/doped-diamond-sends-single-photons-flying.ars

UPDATE:

The commercially available D-Wave (quantum annealing) machine can factorise numbers, according to some of their marketing, and this StackExchange question. The StackExchange question also describes the currently perceived limits of D-Wave, and quantum annealing in general, estimating that N² qubits are required to factor an N-bit number. The current D-Wave has only 512 qubits.

If the number of qubits were to double annually, then 1024-bit SSL encryption could potentially be easily cracked by such a device in 11 years.
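Spelling out that arithmetic, using the N² rule of thumb quoted above:

1024^2 = 1{,}048{,}576 \ \text{qubits needed}, \qquad 512 \times 2^{11} = 1{,}048{,}576

so eleven annual doublings from today’s 512 qubits would reach the requirement for a 1024-bit key.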

However, this is what is commercially available. Given enough money, it is conceivable that a Government / Military could possess one now. Maybe even the NSA.

UPDATE:

D-Wave cannot break today’s SSL web encryption:

The optimizer they now claim to have is restricted to problems that can be mapped to an Ising model—in other words, the computer is not universal. (This precludes Shor’s algorithm, which factors integers on a quantum computer.)

http://arstechnica.com/science/2013/08/d-waves-black-box-starts-to-open-up/2/

UPDATE

I’ve got less than a year left on my 5-year prediction, but I have finally found a scientist making a prediction of their own. It would not be unreasonable to think the US DoD could have this already, or within a year, but it is most practical to simply say I was possibly out by 5 years. So effectively the warning starts today!

They hold out the possibility of a quantum computer being built in the next five to 15 years.

see http://www.abc.net.au/pm/content/2014/s4105988.htm

UPDATE [2015-09-30]:

Even the NSA are worried about the post-quantum computing world, see: http://hackaday.com/2015/09/29/quantum-computing-kills-encryption/

UPDATE [2015-10-14]:

Maybe my prediction was right (only out by 4 months): http://www.engineering.unsw.edu.au/news/quantum-computing-first-two-qubit-logic-gate-in-silicon

Apparently it is feasible to build a quantum computer today – one that can defeat all encryption used in internet communication today (as long as that data is wiretapped and stored). Although it may take 5 years for mass-scale commercialisation, I’m sure the NSA, FBI and DoD of the USA would be capable of building a quantum computer now, if they don’t already have one.

The breakthrough by UNSW could very well have been discovered earlier in secret. So this has implications for international espionage today, broader law enforcement in years, and the whole underpinning of internet security in 5 years.

Using WiFi and searching Google via HTTPS? In 5 years, the owner of the Access Point could very likely decrypt your searches, and other information including bank passwords.

The only secure encryption today requires a secret (such as a password) to be shared in advance at each end of the communication channel – that is, symmetric cryptography with pre-shared keys.

Further Reading

http://en.wikipedia.org/wiki/Quantum_computer

http://www.newscientist.com/article/dn17736-codebreaking-quantum-algorithm-run-on-a-silicon-chip.html

http://www.itnews.com.au/News/213800,toshiba-invention-brings-quantum-computing-closer.aspx

http://www.nature.com/nature/journal/v460/n7252/pdf/nature08121.pdf

http://spectrum.ieee.org/computing/software/cryptographers-take-on-quantum-computers

Super city: Pushing the technology boundaries

In the last article I discussed the concept of Technology Development Zones. This concept can be taken all the way with what we can call a super city. I started on this idea after thinking: what could I do with $1bn? After finishing with dreams of a house on the moon or a medieval castle in the mountains, I started jotting down some points.

Why can’t we start building an entirely new, entirely futuristic city? When you start from scratch, you can benefit from having no boundaries.

Australia just so happens to be the perfect place for such an idea. A good economy. A housing shortage.

The Detail

I’ll try to keep it short

  • The city is a skyscraper – providing spectacular views for all residents. ie. 500m high, 500m wide, 40m deep, accommodating a little under 50,000 people.
    • This reduces the human footprint, with all services contained within a single building. The only reason for people to leave the building is for recreation and farming.
  • It’s located at least 300km from Melbourne – reducing city sprawl
  • But it’ll only take you 30mins to travel 300km in any direction – see Transport below
  • Implements a “Base Luxury Standard”. A body corporate scheme, to operate on economies of scale.
    • Logistics – Cater for all logistics problems in one solution – Let’s call it a Transporter
      • A 3D “elevator” system
      • Elevator capsules which can carry up to 10 people and a few tonnes
      • Can travel up/down, left/right, and back/forth
      • Eg. move laterally from the middle of the front of the first floor to the back-left of the top floor, without “changing elevators”
      • Transporter capsules travel laterally along what would normally be the hallway for walking to your apartment
        • When travelling laterally to an apartment, the transporter doors and apartment doors open together
        • In an emergency, the apartment doors can be manually opened and occupants can walk down the lateral transporter shaft
          • Manual overrides are detected by the system and transporters for the entire floor are speed reduced and obstacle detection is activated to avoid collision with people.
      • Keep in mind that in an emergency, transporters should still be operational laterally, as there is no danger of dropping.
      • Transporters are not just used to transport people but also:
        • Food – Washable containers, transport prepared food, cutlery etc.. from kitchens, used containers are returned to be washed.
        • Heating / Cooling – Heat bricks or molten salts and LN2 packs for refrigeration, air conditioning and heating
          • No pipes = less cost, no maintenance
        • Water – A set of dedicated water transporters are used to fill small reservoir in each apartment
          • No pipes = less cost, no maintenance
          • Bathroom and commercial facilities do have pipes
        • General Deliveries – Furniture, clothing, presents, mail, dirty/washed clothes etc…
        • [Not Data] – That’s fixed line or radio wireless, can’t just transport hard disks, latency is much too slow 🙂
    • Food (Diet) – A set base cost for food every week, which is pooled, and from which food providers are then paid. To start off with, fully automated systems are desirable to peel, slice, etc.; it’s possible to have a fully automated catering system which deals with 80% of meals. The final 20% is catered for by chefs, who still use machines for preprocessing – and are an additional cost. Eg. $5 / person per day for any basic meal, and more for specialist meals.
    • Climate – Instead of having thousands of small air-conditioner compressor inverters in every apartment, have 3 very large and very efficient heat pumps and then efficiently transport the heat/cold. Each apartment then has its own fan and climate-control system where liquid nitrogen and heat bricks are utilised; a simple refrigerator and freezer also run off the liquid nitrogen, removing two more compressors.
    • Data – Fibre runs to each apartment, and then inside is patched to different equipment. A fibre runs to the TV and Ethernet over Power is provisioned and isolated for the apartment so that every appliance and electrical device is controllable. Wireless systems are a feasible alternative.
    • Hygiene – Several banks of showers and toilets on each floor; the transporter takes you to the next available toilet or shower as required. Instead of having a toilet and shower taking up space in each apartment, only to be used a hundredth of the time in a day, you can be more efficient with a central bank of them. The showers and toilets are self-cleaning, with minor cleaning cycles after every use and major clean cycles as required (eg. every half day).
    • Transport – Within the building, the transporter can take you anywhere, but what makes a remote city work well is fast transport to already-established city centres. Monorail is quite expensive and still relatively slow and inefficient when compared to air travel over long distances (about 800km). There is plenty of scope for new transport ideas:
      • Air-evacuated tunnel rail (supersonic speeds without the risk and fuel of staying aloft)
      • Personal air craft (looking more like aeroplanes and possibly launched by ground based launcher, not those ridiculous artist impressions of cars with 4 loud, fuel guzzling turbine engines)
      • Automated Electronic Vehicle transport
      • Community car pool (basically like small automated buses which only travel along a particular route or highway)
    • Menial Tasks – Clothes/Dish washing is fully centralized and automated. Less tedious work for residents means more time to live – a higher quality of life.
    • Shelter – No one truly owns their space, they can either hold (pay around $50,000 for their entire life) or rent (interest of $50,000 over lifetime)

Conclusion

With a super city, developed countries have an opportunity to push past the so-called “modern” boundaries of today and exceed people’s expectations with a completely reinvented society and lifestyle. Super cities are not just technology test beds; they also offer citizens cheaper living for a greater quality of life and less stress – freedom from menial tasks, very short waits for transport and short travelling times.

But even developing countries could stand to benefit. The cost-effectiveness of super cities and their efficient systems could help pull poor countries out of poverty. And various novelties could be redeployed into existing cities.

Technology Development Zones: Economic Development Zones for developed nations

How long can we say a combustion engine is modern? Or a toaster, or a microwave, or a stove, or even lounge rooms? We can’t break a lot of traditions or social norms, but there are definitely people out there willing to give it a go. I once saw a documentary about China’s Economic Development Zones (EDZs); from what I know, they are small geographical areas which are isolated from the macro economy and its regulation, and which are used to attract investment. China most famously uses such zones to help its economy grow – allowing western investors to leverage cheap Chinese labour, but with western business practices. These EDZs are economic hot spots which eventually flow through to the greater Chinese economy. The general idea is that developing countries need EDZs to industrialise. I propose that such EDZs should never disappear, even in an advanced industrialised nation. An EDZ in a developed economy should have a technology focus rather than an economic one – making it a Technology Development Zone (TDZ) – and should be harnessed to further technology, processes, social refinement and regulation. Just as in developing countries, the main barriers are culture and law.

I consider TDZs to be important for future-seeking, “modernised” societies. There is often cultural resistance to change; a TDZ would attract people and families who are excited to consume new technologies and are open to change. A TDZ will help innovators commercialise, selling to a tight, first-mover market. People live in a TDZ voluntarily. Residents of a TDZ are co-operative, possibly innovators themselves, and should be able to find employment within the TDZ across a wide range of industries. They are expected to try out new things, answer weekly questionnaires, contribute feedback and embrace change. People outside a TDZ are more likely to accept change if they have seen it in practice, and investors are also more likely to invest in an idea that can be implemented in a co-operative market. It’s quite possible for the progressive social norms of a TDZ to spread outside of it, and transform a nation to be more conducive to change.

Many amazing technologies could be developed if everyone had access to all IP. Patents aren’t evil; they are necessary to protect inventors so they may extract value from their inventions, blocking out competitors which didn’t have enough foresight. Unfortunately there are cases where patent holders sit on a patent and don’t commercialise it, with the potential consumers being the losers. There are even cases where companies buy out technology just to stop losing their traditional markets. A TDZ could offer a small community immunity from IP laws, offering tremendous innovation opportunities. IP holders would have priority to commercialise their IP within a TDZ, but if another company wants to build a product (say a fridge) which uses another company’s IP (eg. text-to-speech) and the IP owner is not building the same product within the TDZ, then there should be no block. As a result, all products to be built for the TDZ should be approved by a Product Register, to avoid product overlap and to negotiate IP priority. I don’t consider such IP-law exemptions mandatory for the success of a TDZ, but they would have significant benefits.

I have seen evidence that highly competitive markets can deter innovation. The latest craze – eg. the iPhone – although innovative, is already successful in the regular marketplace and can dishearten new local innovation. The competitors in the smartphone market are super players such as Apple, Google, RIM and Microsoft. Thankfully Google created an open platform which is starting to reduce the monopolistic iPhone dominance. TDZ managers could help keep fads out of a TDZ, freeing up consumption capacity for new innovation. Technologies and products within a TDZ should be limited, where possible, to those not found outside the TDZ. Residents within a TDZ would never have the luxury of settling on a device such as an iPhone: new devices would supersede old ones. For example, the iPhone would have been expected, then the Google Nexus, then a Windows Phone 7 device, and so on. In trials, residents should receive significant discounts on such devices; after all, they would also be expected to answer questionnaires quite frequently and sustain a relatively high consumption of technology.

The electric car is a great example for illustrating the need for a TDZ. In a previous article I discussed the resistance to change from the oil and combustion-automotive industries. If a TDZ was set up in a small city, a micro-economy could be tooled to demonstrate a society living with electric cars. From that micro-economy the idea could spread to the rest of a country, and then the rest of the world. The changes would be gradual, and the industries would be able to foresee the success in the TDZ and adapt for the eventual success in the greater community. Within the TDZ, regulations would be different: the government could declare all EV patents unenforceable, and road laws would be relaxed, requiring only an engineer’s approval for reasonable vehicles. Consider the benefits: innovators would discover the best frontiers for the technology, such as logistics and cost-effective transport for the housebound elderly. Then the technology could move to mainstream transportation use, where the single occupant of a car can be productive while travelling.

Imagine the super-futuristic TDZ. There could be social change almost impossible to introduce today due to safety hysteria. You could redesign infrastructure and experiment with new city layouts. Citizens would expect to be able to watch a movie or do some work while they’re travelling; groceries are automatically ordered and delivered; no one does dishes or cooks their own meals, or irons or washes clothes; Internet speeds are tens of gigabits per second. Such revolutionary change can only happen in a captive, conducive society where change is embraced.

The most effective TDZ would be a purpose-built city. It could be close to a capital city, so initial citizens can find work outside while the local economy and infrastructure are developing. Such a move would require significant conviction from a politician, and cannot be expected of the first TDZ in a nation – a TDZ in itself could be too progressive for a politician of today to call. IP relaxation could have serious political ramifications, but a successful TDZ may significantly outweigh those risks. In any case, a TDZ is the sort of thing that can be scaled up in stages. I live in Geelong. Geelong could be declared a TDZ precinct; this could start a demographic shift, seeing technology “thrill seekers” move to the region. At the same time, a new suburb could be planned and developed as a micro-TDZ. Depending on the success of the TDZ precinct, a purpose-built TDZ may become politically feasible.

The TDZ may very well play a significant part in our future. Leaving behind most traditions and inhibitions, we can begin to understand how society can better adapt to technology. Aside from the ideals of a more modern world, the economic benefits may overshadow even the most optimistic expectations. What are the benefits of technology not merely available, but fully embraced by society?

In 1899, the U.S. Commissioner of Patents was famously quoted saying, “Everything that can be invented has been invented.” We must not let ourselves become accustomed to the status quo, we have a lot to learn.

Update:

http://blogs.news.com.au/heraldsun/andrewbolt/index.php/heraldsun/comments/more_class_war_as_the_government_robs_business_to_pay_bureaucrats/

Looks like my idea has been picked up in some form; too bad the team captain is going to lose the game (they’ll botch this, just like everything else).

The Combustion Automotive Industry: Efficiency vs Jobs

I’ve been pondering the benefits of electric cars. Why don’t we build those? They’re so efficient, and they use a tiny fraction of the number of parts of a combustion car (no radiator, oil, pistons, valves, injectors, fuel pump, filter, diff., …). But I quickly realised that, apart from oil companies buying out the patents, the car companies are as much to blame for keeping us in the stone age, and governments even more so.

You see, it’s all of those parts that keep people in jobs. The automotive industry is huge. What happens when your car breaks down? You give a mechanic work, who makes you pay for a replacement part (one that’ll just break down in another few thousand Ks). It’s this inefficient system that creates an economy. Now what politician in their right mind would fire an entire industry and not face a huge backlash?

So the question comes down to Efficiency vs Jobs. Can the efficiencies of an electric car pay for the lost jobs? I believe serious thought needs to be given to how such changes can be made. Such jobs will be antiques in the coming years; we do need to find new jobs for these people, so obviously the change has to be gradual.

But what if a new competitor enters the scene with a shiny electric car? Who could that be, you say? How about China. Their government has actually announced that they want the country to make electric cars. And they can do it – they’re communist, it will happen. Now China’s market weakness is quality control, but the electric car’s strength is its reliability – a perfect combination.

Soon we may have no choice, and be forced to abandon our combustion ways and embrace the electric car. My next article will actually touch on a possible way to transform the automotive industry without hurting jobs.

PS: If you’re reading this thinking that hydrogen cars or technology-X cars are going to be the winners, you’re right – they’re all electric.

Personally though, I think battery swap will be the best combination.

Moore’s Law – Technology or Economy?

This law has always puzzled me. You seem to see it everywhere – wherever someone wants to make themselves look more intelligent; it’s sort of cliché. But then it occurred to me: maybe this law isn’t really much of a law or prediction, but more of a self-fulfilling statement.

What I intend to demonstrate is that this law is something which marketing people are peddling. I’m not saying the law is fundamentally flawed to any degree; I’m simply pointing out its true meaning, the other factors operating behind the law and as a result of the law, and its misuse in the context of technology.

First of all, Moore’s law states that “the number of transistors that can be placed inexpensively on an integrated circuit has doubled approximately every two years”. Now the market is quite happy with that, and in fact they have found ways to bend the statement, such as by saying that speed doubles every two years.

Now consider CPU product cycles. They are quite short – around two years. And the market likes to keep upgrading. But now consider that Moore’s law implies dependence on two things: transistor count and cost. I’ve seen plenty of articles over the years about new technologies which could obliterate the two-year cycle and more than double the transistor count. But that’s not the problem, is it?

My thoughts are that CPU manufacturers could build inexpensive CPUs which obliterate the law’s boundaries, but such technology is best saved for when they need it to keep up with the law – and, specifically, for a market which is ready to consume it.

Pretend Intel today released 16-core CPUs. The market would love to fork out the usual premium at the start of the product life cycle, which would then be left to the higher-volume, lower-cost market at the end of the cycle. But then what would Intel do? All of a sudden consumers wouldn’t need to upgrade for longer, meaning Intel’s next jump wouldn’t be so successful. The “market”, from the vendors’ perspective, needs steady increments.

So in summary, CPU manufacturers hide behind Moore’s law to make customers buy upgrades regularly. Journalists love to commentate on the latest technology and point out that it’s following Moore’s law. It’s because Moore stated the law that the CPU manufacturers don’t feel obliged to commercialise all of their IP at once, and it’s because of Moore’s law that customers don’t expect anything more – they just continue to fork out the money every 5–7 years for their new you-beaut server of the day. So I believe the law is less about technology and more an approximation of an industry’s financial capacity for a product cycle.

I guess it’s not so much a conspiracy – the system does work, after all. More of an insight, in my humble opinion – words can be powerful.

Co-Operative LTE

Introduction

Australia is on the way to getting “separate” LTE networks, from Telstra, Optus and others, each taking about 20MHz of spectrum. Each of these networks will overlap, representing a poor allocation of resources.

Why not have a government or industry body build a single wholesale network which would reach further and provide faster speeds than any of the smaller individual networks?

Imagine if a single wholesale network had all 100MHz of spectrum! You could achieve gigabit speeds, and it would be very cost-effective for customers.

A better picture

Consider the status quo:
+ provider A uses 10 towers to cover a town with 95% coverage, 20MHz of spectrum and up to 300Mbps, at a cost of $2bn, and;
+ provider B uses 10 towers to cover the same town with 95% coverage, a separate 20MHz slice and up to 300Mbps, at a cost of $2bn, and;
+ provider C uses 10 towers to cover the same town with 95% coverage, another 20MHz of spectrum and up to 300Mbps, at a cost of $2bn.

In total you have 30 towers, 95% coverage, top speeds of 300Mbps and a total cost of $6bn. The customers are the losers here.

If instead a wholesaler uses 15 towers to cover the town with 99% coverage, 60MHz of spectrum and speeds of up to 900Mbps, at a cost of $3bn, then there is:
+ a big saving of $3bn;
+ a lower cost of service;
+ an improvement in coverage, and;
+ an improvement in speed.

A possible business plan

Remember this is only *one way* of achieving this. Let’s call the wholesale company WholeMobile:
+ Each of the providers (A-C) would invest $1bn
+ Each of the providers would provide information on their currently owned mobile tower sites and current backhaul arrangements to WholeMobile.
+ WholeMobile would pay the appropriate providers to upgrade the best sites to LTE (selecting from sites from all three providers)
+ WholeMobile would receive free access to the LTE services and ownership of the LTE components.
+ The tower property could remain property of the providers.
+ Co-located equipment would remain property of the respective owners.
+ The spectrum would be owned by WholeMobile.
+ WholeMobile would pay the relevant providers to upgrade backhaul to the various sites, with all the additional bandwidth belonging to WholeMobile.
++ Where upgrading backhaul does not suffice, new links will be commissioned by WholeMobile and fully owned by WholeMobile.
+ Profit from WholeMobile is saved for future network upgrades. Any excess profit is returned as a profit to investors.
+ The government may make a small investment to help inspire the process.

Operationally:
+ WholeMobile represents the full $3bn investment, with ownership of the use of the LTE network
+ Only a few sites should be required, where coverage is to be improved
+ Rural customers can now have premier coverage and speed (no need for satellite); in fact, coverage could potentially reach 99.9999% of the population and roads.
+ All providers and retailers regardless of investment pay for access (No need for ACCC to regulate prices).
+ Example of an access business model: use a [per-KB weighted model]. Traffic on a congested tower results in penalty accounting. Eg. 1KB from a maxed-out tower is accounted as 10KB for the customer instead (see the sketch below). This ensures customers don’t try to use all 900Mbps of bandwidth all the time.
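A minimal sketch of that weighted model (the 10x penalty is the example above; the 80% “congested” threshold is my own placeholder):

using System;

class WeightedAccounting
{
    // Traffic carried while a tower is congested is billed at a multiplier,
    // discouraging customers from saturating the cell.
    static double BilledKilobytes(double kilobytes, double towerUtilisation) =>
        kilobytes * (towerUtilisation >= 0.80 ? 10.0 : 1.0);

    static void Main()
    {
        Console.WriteLine(BilledKilobytes(1.0, 0.50)); // quiet tower: 1KB billed as 1KB
        Console.WriteLine(BilledKilobytes(1.0, 0.95)); // maxed tower: 1KB billed as 10KB
    }
}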

The approaching revolution

By the way, 300Mbps * 11.5 is 3.5Gbps, not 3.5Tbps (my mistake). But keep in mind that the digital dividend is just the beginning. Take into consideration the 403–520MHz spectrum, and existing licensed spectrum.

Digital technology has been exploding in the wireless domain {Digital TV, Digital Radio, 4G, etc}. It is obvious that digital encoding is much more efficient than analog. If the whole spectrum were re-assessed and applied with mostly digital technology, most government, military, naval and air services could be greatly condensed, leaving huge chunks of spectrum for commercial purposes. We are only seeing the beginning of a wireless revolution.

Cool Down Global Warming

My Position on Global Warming:

Reference information: http://www.geocraft.com/WVFossils/Carboniferous_climate.html

Warming – Likely. I have my doubts, and maybe it’s not to the extent they’re making it out to be, but it looks quite likely. It appears it’s happened in the past.

Man-made Cause – Unlikely. I’ve seen plenty of evidence that we’re not the cause – especially when you go back millions of years and see there is no hard link between CO2 and temperature. But who cares: even if it is caused by CO2, how much of that CO2 is man-made? If we eliminated all our CO2 emissions now, would it stop the permafrost in Russia from thawing and releasing yet more CO2?

Overall – Not a huge problem. If the Earth’s average temperature is normally much higher, then what are we afraid of? Life has existed through these times before, and if it has happened previously in a natural cycle, who are we to stop it? I think we just don’t like change; we don’t want to move our houses.

Solutions – Direct. If we want to solve the problem at all, we should take a direct approach and cool down the Earth.

I’m getting frustrated with the politics, media-induced popularity and hype around global warming. Sure, it could be happening – but to ram it down our throats and try to tell us that it’s caused by CO2 is ridiculous, especially given the scientific facts. But even if it were caused by CO2, then according to the alarmists, changing our polluting habits will not make any difference for the next 100 years! So why spend billions of dollars which will have no effect? To me the answer is simple. If the Earth is warming up, and if we attempt anything, cool it down directly.

Let’s look at the current popular equation.

Global Warming > Caused by CO2 > Reduce CO2 > Find technologies to reduce CO2.

My formula is:

Global Warming > Find technologies to cool it down.

If the problem really is that alarming, bypass the CO2, and get a world-wide protocol on such a direct solution.

Every day we are heated by the Sun. In the end it’s the Sun that’s warming the Earth – sure, CO2 may be keeping more warmth in, but what we need to do is get rid of the heat. Take a look at this article: http://en.wikipedia.org/wiki/Solar_energy. The Earth is constantly being bombarded by 174 petawatts (PW) of solar energy. This diagram (http://en.wikipedia.org/wiki/File:Breakdown_of_the_incoming_solar_energy.svg) shows how the energy is absorbed and emitted. 89 PW is absorbed by land and sea, and that’s good – our plants bathe in the light; all living organisms (except a few) depend on the light and heat from the Sun. But it seems that at the moment it’s too much.

Idea 1 – Terrestrial Reflectors

The greenies are pleading with us to build solar thermal power plants in deserts – it makes sense, but they’re enormously expensive. We can indeed benefit from this free energy, but I suggest we take an intermediate step which will cost less, cool the Earth and prepare us for a future expansion of solar thermal power.

And we definitely don’t need that heat in the deserts. The solution…

Mirrors… glorious mirrors… actually, solar thermal reflectors – there’s a difference… and lots of them. On average, every square metre of Earth receives about 1kW of solar power – the figure is higher near the equator and in deserts (where there’s less moisture in the air). Solar mirrors can reflect 93% of that heat back into space (http://en.wikipedia.org/wiki/Solar_mirror). By reflecting enough heat back through the atmosphere and into space, less is absorbed by the ground – where the heat is most readily absorbed into the lower atmosphere. Think about those hot days, when the bitumen is radiating heat: this is what makes it a hot day. If the Sun didn’t shine on the ground and the wind didn’t pick up the heat, you’d be fine. Now of course, we still need light and heat – let’s not go overboard…

The idea is to install fixed solar mirrors in the desert, arranged in a constellation best suited to future solar thermal power generation. For power generation, one needs to track the Sun to keep the light on a central solar tower, but we don’t need that to cool down the Earth – just to ensure the heat reflects back up into space. So the mirrors would be fixed – no tracking electronics or mechanics required – but they could still be mounted on stands which have the dual-axis movement; they’re just locked until you upgrade them for power generation.

With a Solar Thermal Power plant (http://en.wikipedia.org/wiki/Cloncurry_solar_power_station) you want to generate electricity, and there’s much more complexity and cost in that endeavour. You need the tower, power transmission, tracking mechanics, and all the people to make that happen, then maintenance, etc… A 10MW power plant costs about $31M and consists of about 54 towers, with mirrors covering 60,000 sq. metres (http://www.lloydenergy.com/presentations/Cloncurry%20Solar%20Thermal%20Storage%20Project.pdf)

How much would it cost to just have the mirrors? If the mirrors cost $25 per square metre (including installation), that represents $1.5M, or roughly 1/20th of the cost of the whole plant. (We’re not including land cost – which shouldn’t be a problem in the desert if it’s a Government-backed initiative. Also remember, the mirrors don’t have the tracking mechanics, which would cost a lot to install and calibrate.) But if you spend a bit more you get electricity, you say? Well, let me assure you that even nuclear power (considered expensive) is cheaper to consume than solar thermal. And aren’t we trying to combat global warming?
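A quick sanity check on that mirror-only figure (the $25/m² installed price is my assumption; the plant cost and mirror area are from the Cloncurry presentation):

```csharp
using System;

// Mirror-only cost vs the full Cloncurry-style 10 MW plant.
const double MirrorAreaM2 = 60_000;      // mirror area of the plant
const double CostPerM2 = 25.0;           // assumed installed price, $/m2
const double FullPlantCost = 31_000_000; // ~$31M for the complete plant

double mirrorsOnly = MirrorAreaM2 * CostPerM2; // $1.5M
Console.WriteLine($"Mirrors alone: ${mirrorsOnly:N0} (~1/{FullPlantCost / mirrorsOnly:F0} of the plant)");
```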

Now, in equatorial and Australian deserts the average is more like 2.2 kWh per square metre per hour (http://www.off-grid.net/2004/07/10/how-much-solar-energy-is-there/). Those 60,000 square metres of mirrors would reflect around 132 MWh of solar energy back into space each hour during the day (which must mean the solar thermal plant converts less than a tenth of the energy?). Over a day (8 peak hours) they would reflect a whopping 1,056 MWh of solar energy – that’s over 1 GWh! All for $1.5 million.
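And the reflected-energy arithmetic, using the same working numbers:

```csharp
using System;

// Energy reflected by the 60,000 m2 desert mirror field.
const double DesertKWhPerM2PerHour = 2.2; // working desert figure from the link
const double FieldM2 = 60_000;
const double PeakHours = 8;

double mwhPerHour = FieldM2 * DesertKWhPerM2PerHour / 1000; // 132 MWh
Console.WriteLine($"{mwhPerHour} MWh/hour, {mwhPerHour * PeakHours} MWh over an 8-hour day"); // 132, 1056
```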

So by pointing that energy back into space, we can actually attack the problem directly. They want to spend billions to reduce carbon and tax many citizens at the same time. For $100 billion you could have 4 billion square metres of reflectors (for future solar thermal use the reflectors are spaced out – say one reflector in every 3 m × 3 m block, i.e. ×9 – so the mirrors themselves would cover about 63 km × 63 km, with the spaced-out field roughly three times that on a side), sending back 88 TWh of energy per hour, or 704 TWh per 8-hour day, into space. That might be enough to tip the global warming scales in the other direction – pretty good, considering they’re pouring in all that money which will have no effect (at least for 50–100 years).

Also, imagine the environmental implications. The deserts could be cooled and in fact re-vegetated… Or, if the Earth cools enough, put in the solar towers (over a longer time frame, as there’s no rush by then) and reap the energy, pointing the mirrors back at the sky if global warming ever becomes a problem again.

Idea 2 – Space Reflectors [Best Option]

Only 89 PW of the total 174 PW of energy pounding the Earth actually hits the ground. This means terrestrial reflectors are only about half as effective as they would be in space, and the reflected energy still has to pass through the atmosphere twice, where some of it is absorbed.

The idea is to put a sail between the Earth and the Sun. Initially I thought you could just have a huge inflatable object floating in space, but then I remembered the solar wind (and Earth’s gravity – which is more of a problem, because the object isn’t orbiting the Earth) – it would push the object back towards Earth. For this to work, you either need to place it closer to the Sun (where the Sun’s gravity overpowers the wind), or have many smaller streamlined objects – the latter seems more likely. You would need to send up a rocket, break out of Earth’s orbit, and travel to a position which maximises the objects’ shading of the Earth while requiring no propulsion to hold station (essentially the region of the Sun–Earth L1 point). Then jettison the objects (likely near-vacuum “balloons”, which expand in the vacuum of space).

There would be no problem with hard shadow spots on the Earth, as the Sun’s rays are not parallel. In practice, the objects would simply dim the Sun very slightly.

To match Idea 1’s target of 704 TWh per day of reduction (that’s 24.4 GWh for every second of the 8 peak hours), this would require a sail of (8/174,000,000 × 127,800,490 km² =) 5.8 km², compared with 4,000 km² of mirrors terrestrially. This size sounds achievable with one launch. If the sail sat close to the Moon’s orbit, a 47,000 kg payload could be delivered (http://en.wikipedia.org/wiki/Payload_(air_and_space_craft)). (The sail would need to be slightly bigger out there, as the Sun-spot grows as you approach the Sun – right near the Sun it is the size of the Sun’s cross-sectional area. And perhaps you could sit just between the Earth and the Moon – the Moon would attract meteors etc., preventing damage to the sail, and if placed correctly its periodic passing might pull the sail back away from the Earth, enough to counter the solar wind?) So a sail of 5.8 km² would need to weigh less than 47 tonnes, which works out to about 8 g/m² – so you’d probably need a bigger rocket than the Saturn V. Not very comforting (at 6.35 µm, foil weighs 17.2 g/m²). So let’s use multiple launches and a 100 g/m² material – therefore 13 launches at $150 million each. In total, with the material at $1/m² plus the launches, it would cost $5.8 million (materials) + $1.95bn ≈ $2bn. (Compare that to $100bn for the terrestrial solution.)
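Here’s that launch budget as a sketch (the 47 t payload and $150M per launch are the working assumptions above, not quotes from any real launch provider):

```csharp
using System;

// Sail mass and launch budget for Idea 2, using the figures in the text.
const double SailAreaM2 = 5.8e6;       // 5.8 km2
const double PayloadKg = 47_000;       // assumed lunar-distance payload per launch
const double MaterialGPerM2 = 100;     // heavier, more practical sail material
const double LaunchCost = 150_000_000; // assumed $150M per launch
const double MaterialCostPerM2 = 1.0;  // assumed $1/m2

double maxGPerM2 = PayloadKg * 1000 / SailAreaM2;         // ~8.1 g/m2 limit for one launch
double sailMassKg = SailAreaM2 * MaterialGPerM2 / 1000;   // 580,000 kg
int launches = (int)Math.Ceiling(sailMassKg / PayloadKg); // 13 launches
double totalCost = launches * LaunchCost + SailAreaM2 * MaterialCostPerM2;

Console.WriteLine($"One-launch limit: {maxGPerM2:F1} g/m2 (6.35 um foil is ~17.2 g/m2)");
Console.WriteLine($"At {MaterialGPerM2} g/m2: {launches} launches, total ~${totalCost / 1e9:F2}bn"); // ~$1.96bn
```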

Now, with the same $100bn you could have 50× the coverage, blocking about 1.2 TW (and because it’s in space it works 24 hours a day, making it about four times as effective as the terrestrial idea – around 29.3 TWh per day!). Very effective for the purposes of reversing global warming – in fact I think this is dangerous: you’d be at risk of creating an ice age!

Costs (comparing Idea 1 and Idea 2 at a 1 PW reduction):

Terrestrial: To stop 1 PW of energy hitting the Earth’s surface would cost about $573 trillion at $25/m² for reflectors. (I’m not saying you need to stop a full 1 PW – but it makes for a good comparison.)

Terrestrial workings: surface area of Earth = 510,000,000 km²; the Sun lights half of it at a time = 255,000,000,000,000 m². 1/89th of the Earth’s surface = 5,730,337 km², which at $25/m² = $143.258 trillion. Multiply by 4, because the mirrors are only effective for about a quarter of the day (if all in one spot), or a quarter as effective all of the time (if spread around the Earth near the equator): $573.032 trillion.
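The same workings in C# (the ÷89 step sizes the mirrors against the 89 PW actually absorbed at the surface):

```csharp
using System;

// Terrestrial option: area and cost to keep 1 PW off the ground.
const double EarthSurfaceKm2 = 510_000_000;
const double SurfaceFluxPW = 89; // solar power absorbed by land and sea
const double CostPerM2 = 25.0;   // assumed $/m2 of installed reflector
const double DutyFactor = 4;     // mirrors only effective ~1/4 of the time

double mirrorKm2 = EarthSurfaceKm2 / SurfaceFluxPW;    // ~5,730,337 km2 per PW
double trillions = mirrorKm2 * 1e6 * CostPerM2 / 1e12; // ~$143 trillion
Console.WriteLine($"{mirrorKm2:N0} km2 of mirrors, ~${trillions * DutyFactor:N0} trillion after the x4");
```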

Space sail: To stop 1 PW of energy hitting the Earth (remember, you need only about a quarter of the material, because it’s always in effect – unlike the terrestrial solution’s 8 hours a day) would cost about $800bn. So if the sail were thin enough to launch on a single rocket, stopping 1 PW of energy is feasible – and overkill, I’m sure. It certainly looks like the best option, with extra thought needed on the sail material (thinner is better – lighter – cheaper).

Space sail workings: diameter of Earth = 12,756.2 km, so the Sun-spot on Earth (its cross-section) = 127,800,490 km². To reduce 174 PW by 1 PW you need to block 1/174th of the Sun-spot = 734,485 km² (compare that to the 5,730,337 km² of reflectors required on Earth – 7.8× as much; plus, in space they don’t necessarily need to reflect – space is cold, so they can absorb and cool – and the sail can potentially be one big thin sheet, with holes to stop it being pushed around). Because you’re building one huge sail and only “install” once, you can get a much lower per-m² price including installation. (We’ll put aside the fact that at this scale you would need thousands of launches – that doesn’t scale well – so let’s pretend it needs only one massive launch, a ~$65bn special-purpose delivery.) At $1/m² the material would cost $734,485,000,000, i.e. ~$735bn, + $65bn = ~$800bn. If 5.8 km² requires 13 Saturn V launches, then this area means 126,635 sail units – at 13 launches each, around 1.65 million launches = ~$247 trillion with conventional rockets.
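And the sail workings, for checking (the $1/m² material and the single ~$65bn super-launch are the assumptions above):

```csharp
using System;

// Space sail option: block 1 PW of the 174 PW reaching Earth.
const double EarthDiameterKm = 12_756.2;
const double TotalFluxPW = 174;
const double MaterialCostPerM2 = 1.0; // assumed thin-sail material, $/m2
const double SuperLaunch = 65e9;      // pretend one massive launch, ~$65bn

double sunSpotKm2 = Math.PI * Math.Pow(EarthDiameterKm / 2, 2); // ~127,800,490 km2
double sailKm2 = sunSpotKm2 / TotalFluxPW;                      // ~734,485 km2
double totalBn = (sailKm2 * 1e6 * MaterialCostPerM2 + SuperLaunch) / 1e9;
Console.WriteLine($"Sail area: {sailKm2:N0} km2, total cost ~${totalBn:N0}bn"); // ~$800bn
```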
