Let's leave JavaScript behind

Disclaimer: I am sure JavaScript will continue to be supported, and even continue to progress in features and support, regardless of any Managed Web. Some people simply love it, with all its flaws and pitfalls, like a sweet elderly married couple holding onto life for each other.

It’s great what the web industry is doing with ECMAScript; from version 6 we will finally see something resembling classes and modules. But isn’t that something the software industry has had for years? Why do we continue to handicap the web with an inferior language when there have always been better options? Must we wait another 2-3 years before we get operator overloading in ECMAScript 7?

The .NET framework is a rich, standardised framework with an Intermediate Language (IL). The compiler optimisations, the toolset and, importantly, the security model make it a vibrant and optimised ecosystem which could be leveraged. It could have been leveraged years ago with a bare-minimum Mono CLR.

Google Chrome supports native code, but it runs in a separate process, and calls to the DOM must be marshalled through inter-process communication. This is not ideal. If the native code ran in the same process, it would be a good foundation for Mono.

I believe it is possible, perhaps even trivial, to achieve this nirvana of a Managed Web. We just need to take small considered steps to get there, so here’s my plan.

  1. Simple native code in the same process – JavaScript is currently executed on the main thread, presumably through the window message pump executing delegates. These delegates could simply forward to managed function delegates. But first, we should be able to trigger an alert window from native code compiled inside the Google Chrome code base.
  2. Simple Mono support – Fire up Mono and provide just enough support in a Base Class Library (BCL) to trigger an alert. This time there will be an IL DLL with a class which implements an interface for start-up.
  3. Fuller API – With the simple milestones above completed, a complete BCL API can be designed and implemented.
  4. Optimisations – For example, enumerating the DOM may be slowed by crossing the managed/unmanaged boundary; jQuery-like functions could be implemented in native code and exposed through the BCL.

Along the way, other stacks and browsers could also leverage our work, establishing support for at least Java as well.

Example API:

IStartup

  • void Start(IWindow window) – Called when the applet is first loaded, just like when JavaScript is first loaded (for JavaScript there isn’t an event; it simply starts executing the script from the first line)

IWindow
see http://www.w3schools.com/jsref/obj_window.asp

IDocument
see http://www.w3schools.com/jsref/dom_obj_document.asp
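To make the shape concrete, here’s a hedged C# sketch of what this minimal BCL surface might look like; everything beyond Start(IWindow) is my assumption, loosely mirroring the DOM objects linked above:

using System;

public interface IStartup
{
    // Called when the applet is first loaded, analogous to script execution starting.
    void Start(IWindow window);
}

public interface IWindow
{
    IDocument Document { get; }                         // mirrors window.document
    void Alert(string message);                         // mirrors window.alert()
    void SetTimeout(Action callback, int milliseconds); // mirrors window.setTimeout()
}

public interface IDocument
{
    string Title { get; set; }          // mirrors document.title
    IElement GetElementById(string id); // mirrors document.getElementById()
}

public interface IElement
{
    string InnerHtml { get; set; }      // mirrors element.innerHTML
}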

Warm up – Possible disadvantage

JavaScript can be interpreted straight away, with several levels of optimisation applied only where needed, favouring fast execution. IL would need to be JIT’d, which is a relatively slow process, but there’s no reason why it couldn’t be AOT-compiled by the web server. Still, I see this as the biggest disadvantage, and one that needs to be front of mind.

Other people around the web who want this

http://tirania.org/blog/archive/2012/Sep-06.html

 


University BC

University is becoming increasingly irrelevant for the IT industry.

It’s 3 years of full-time study, yet in a month a talented 12-year-old can write an app that makes him a millionaire. Course content always lags behind; not for lack of pushing by academics and industry, but because the bureaucracy of the system drags. With content such as teamtreehouse.com on the up, there is potential for real upset in the IT education market. And without any entrepreneurship support, there is no excitement and no potential to build something meaningful from nothing. Increasingly, universities will be perceived as the old way, by new students as well as by industry looking to hire.

I would like to see cadetships for IT, with online content and part-time attendance at training institutions for professional development and assessment. Although even assessment is questionable: students are not provided access to the internet during assessments, which does not reflect any true-to-life scenario. I value a portfolio of code over grades.

I seek individuals who have experience in Single Page Applications (SPA), Knockout.js, JavaScript, jQuery, Entity Framework, C#, 2D drawing, graphic art, and SQL (window functions). Others are looking for Ruby on Rails developers. All of my recent graduates have very limited exposure to any of these.

I could be wrong, but if I am right, institutions that ignore such facts are only going to find out the hard way. I know the IT industry has been reaching out to universities to help keep them relevant; it’s time for universities to reach back out to the industry, and relinquish some of their control for the benefit of both students and the industry.


IL to SQL

There are cases where one needs to perform more complex processing, necessitating either application-side processing or custom SQL commands for better performance. For example, splitting one column of comma-delimited data into 3 other columns:

public virtual void DoEntityXSplit()
{
  var needSplitting = db.EntityXs.Where(x => !x.Splitted1.HasValue);
  foreach (var item in needSplitting)
  {
    string[] split = item.DelimitedField.Split(',');
    item.Splitted1 = split[0];
    item.Splitted2 = split[1];
    item.Splitted3 = split[2];
    item.Save(); // either save each item as you go (THIS)...
  }
  db.SaveChanges(); // ...OR save all the changes in one batch
}

When you run DoEntityXSplit, the unoptimised code may run. However, if supported, automatic optimisation is possible, derived from the IL (Intermediate Language – aka .NET bytecode) body of the method, when:
i) the ORM (Object-Relational Mapper – e.g. NHibernate / Entity Framework) supports some sort of “ILtoSQL” compilation at all; and
ii) the function doesn’t contain any unsupported patterns or references.
If so, the raw SQL may be run instead. This could even include the dynamic creation of a stored procedure for faster operation.

public override void DoEntityXSplit()
{
  //This is pseudo SQL code
  db.RunQuery("
    declare cursor @NeedSplitting as (
      select ID, DelimitedField
      from EntityXs
      where Splitted1 is null
    );

    open @NeedSplitting;
    fetch next from @NeedSplitting into @ID, @DelimitedField
    while (@StillmoreRecords)
    begin
      @Splitted1 = fn_CSharpSplit(@DelimitedField, ',', 0)
      @Splitted2 = fn_CSharpSplit(@DelimitedField, ',', 1)
      @Splitted3 = fn_CSharpSplit(@DelimitedField, ',', 2)

      update EntityXs
      set Splitted1 = @Splitted1,
          Splitted2 = @Splitted2,
          Splitted3 = @Splitted3
      where ID = @ID

      fetch next from @NeedSplitting into @ID, @DelimitedField
    end
  ");
}

Of course, this could also be compiled to:

override void DoEntityXSplit()
{
  //This is pseudo SQL code
  db.RunQuery("
    update EntityXs
    set Splitted1 = fn_CSharpSplit(DelimitedField, ',', 0),
        Splitted2 = fn_CSharpSplit(DelimitedField, ',', 1),
        Splitted3 = fn_CSharpSplit(DelimitedField, ',', 2)
    where Splitted1 is null
  ");
}

but I wouldn’t expect that from version 1… or would I?

Regardless, one should treat IL as source code for a compiler which has optimisations for T-SQL output. The ORM mappings would need to be read to resolve IL properties/fields to SQL fields. It may sound crazy, but it’s definitely achievable, and this project looks like a perfect fit for such a feat.
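As a minimal sketch of the starting point, .NET reflection can already hand you the raw IL of a method body; decoding those opcodes and mapping them through the ORM metadata is the (large) remaining step. The repository class here is just a placeholder:

using System;
using System.Reflection;

class MyRepository
{
    // Stand-in for the entity-splitting method shown above.
    public virtual void DoEntityXSplit() { }
}

class IlToSqlDemo
{
    static void Main()
    {
        // Grab the IL bytes of the method we want to translate.
        MethodInfo method = typeof(MyRepository).GetMethod("DoEntityXSplit");
        byte[] il = method.GetMethodBody().GetILAsByteArray();

        // A real ILtoSQL compiler would decode these opcodes, reject
        // unsupported patterns, and emit T-SQL using the ORM's mappings.
        Console.WriteLine($"{method.Name}: {il.Length} bytes of IL to translate");
    }
}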

Where will BLToolKit be in 10 years? I believe ILtoSQL should be a big part of that future picture.

If I get time, I’m keen to have a go. It should be built as a standalone DLL which any ORM can leverage. Who knows, maybe EF will pick this up?


Poppler for Windows

I have been using the Poppler library for some time, over a series of various projects. It’s an open-source set of libraries and command-line tools, very useful for dealing with PDF files. Poppler is targeted primarily at the Linux environment, but the developers have included Windows support in the source code as well. However, getting the executables (exes) and/or DLLs for the latest version is very difficult on Windows. So after years of pain, I jumped on oDesk and contracted Ilya Kitaev to both compile with Microsoft Visual Studio and prepare automated tools for easy compiling in the future. Update: MSVC isn’t very well supported; these days the download is based off MinGW.

So now, you can run the following utilities from Windows!

  • PDFToText – Extract all the text from a PDF document. I suggest you use the -layout option to get the content in the right order.
  • PDFToHTML – Which I use with the -xml option to get an XML file listing all of the text segments’ text, position and size; very handy for processing in C# (see the sketch after this list)
  • PDFToCairo – For exporting to image types, including SVG!
  • Many more smaller utilities
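For example, here’s a hedged C# sketch of how the PDFToHTML -xml output can be consumed (the executable path and file names are assumptions; adjust for your setup):

using System;
using System.Diagnostics;
using System.Xml.Linq;

class PdfTextDump
{
    static void Main()
    {
        // Run pdftohtml in XML mode; this writes output.xml next to the PDF.
        var psi = new ProcessStartInfo
        {
            FileName = @"C:\tools\poppler\pdftohtml.exe", // assumed install path
            Arguments = "-xml input.pdf output",
            UseShellExecute = false,
        };
        using (var process = Process.Start(psi))
        {
            process.WaitForExit();
        }

        // Each <text> element carries top/left/width/height attributes.
        var doc = XDocument.Load("output.xml");
        foreach (var text in doc.Descendants("text"))
        {
            Console.WriteLine($"({text.Attribute("left")?.Value},{text.Attribute("top")?.Value}) {text.Value}");
        }
    }
}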

Download

Latest binary: poppler-0.51_x86.7z

Older binaries:
poppler-0.50_x86.7z
poppler-0.49_x86.7z
poppler-0.48_x86.7z
poppler-0.47_x86.7z
poppler-0.45_x86.7z
poppler-0.44_x86
poppler-0.42.0_x86.7z
poppler-0.41.0_x86.7z
poppler-0.40.0_x86.7z
poppler-0.37_x86.7z
poppler-0.36.7z
poppler-0.35.0.7z
poppler-0.34.0.7z
poppler-0.33.0.7z
poppler-0.26.4.7z
poppler-0.26.3.7z
poppler-0.26.1_x86
poppler-0.26.1
poppler-0.22.0
poppler-0.18.1

We will never meet a space-exploring or pillaging Alien

The thought of Aliens captures the imagination: countless worlds beyond our own with advanced civilizations. But I have strong suspicions that we will never meet an Alien. I’ve always had my doubts, and then I recently read an article which uses very sound reasoning to preclude their existence (I don’t have the reference for the specific one).

DON’T EXIST

It basically goes:

  1. The Universe is roughly 13 billion years old – plenty of time for Aliens to develop technology
  2. The Universe is gigantic – plenty of places for various Aliens to develop technology
  3. We would want to find other Aliens – other Aliens would also want to look for other life
  4. Why haven’t they found us? Why haven’t we found them?
  5. Because they don’t exist
When we first started surveying space and searching for Aliens, we would have found them; they, as we do, would have been transmitting signals indicating intelligence.
NEVER MEET
But there is also another, less compelling, reason. The Universe appears to be expanding, and accelerating that expansion. Unless worm-hole traversal is found to be practically feasible, the whole meeting part will never happen.
OTHER REASONS
Here are some more links to other blogs and articles I found, which add further information and other reasons which logically prove that Aliens don’t exist:
I guess that one or even several logical reasons cannot prove absolutely that Aliens do not exist; we can only be 99.9% or more confident, for example. Only if we search all the cosmos and conclude that none exist can it be an absolute fact. We could have an Alien turn up tomorrow and explain that they have searched the Universe, that they only just recently found us, that it’s only them and us, and that their home world is hidden behind another galaxy or nebula or something. So logic alone is not definitive, but it is certainly a good guide if the logic itself is not disproven.
Take Fermat’s Last Theorem for example; it was proven “358 years after it was conjectured”. There were an infinite number of cases to check, so an exhaustive evaluation was not practical; a mathematical proof was required. Many believed it to be true of course, but Mathematics, being a science, required proof.
So unless we can prove that Aliens don’t exist with scientific observation, and not just with probability, one cannot say with authority that Aliens don’t exist. But at the same time, one definitely cannot believe that Aliens do exist without significant proof.

Windowing functions – Who Corrupted SQL?

I hate writing sub-queries, but I seem to hate windowing functions even more! Take the following:

select
PR.ProfileName,
(select max(Created) from Photos P where P.ProfileID = PR.ID) as LastPhotoDate
from Profiles PR

In this example, I want to list all Profile names, and also include a statistic of the most recently uploaded photo. It’s quite easy, though it looks a little bloated, and compared to windowing functions it is slower. Let’s have a look at the more performant alternative:

--distinct collapses the row-per-photo duplicates the join would otherwise return
select distinct
PR.ProfileName,
max(Created) OVER (PARTITION BY PR.ID) as LastPhotoDate
from Profiles PR
join Photos P
on P.ProfileID = PR.ID
That’s actually quite clear (if you are used to using windowing functions) and performs better. But it’s still not ideal: coders now need to learn about OVER and PARTITION just to do something seemingly trivial. SQL has let us down. It looks like someone who creates RDBMSs told the SQL committee to add windowing functions to the SQL standard; it’s not user friendly at all, and computers are supposed to do the hard work for us!

It should look like this:

select
PR.ProfileName,
max(Created)
from Photos P
join Profiles PR
on PR.ID= P.ProfileID
Group By PR.ID --or Group By PR.ProfileName

I don’t see any reason why an RDBMS cannot make this work. I know that if a person gave me this instruction and I had a database, I would have no trouble. Of course, if different partitioning is required within the query, then there is the option of windowing functions; but for the stock-standard challenges, keep the SQL simple!
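As an aside, ORMs already show that a computer can work this out for you. Here’s a hedged LINQ sketch (assuming an Entity Framework style context like the db used elsewhere in this post, with Profiles and Photos sets); it expresses the same statistic with no OVER or PARTITION in sight:

using System;
using System.Linq;

// Assumes an EF-style context `db` with Profiles and Photos sets, as in the examples above.
var lastPhotoDates = db.Profiles.Select(pr => new
{
    pr.ProfileName,
    // Cast to nullable so profiles without photos yield null rather than an error.
    LastPhotoDate = db.Photos
        .Where(p => p.ProfileID == pr.ID)
        .Max(p => (DateTime?)p.Created)
});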

Now what happens when you get a more difficult situation? What if you want to return the most recently uploaded photo (or at least the ID of the photo)?

--Get each profiles' most recent photo
select
PR.ProfileName,
P.PhotoFileName,
P.PhotoBlob
from Photos P
join Profiles PR
on PR.ID= P.ProfileID
join (
select ProfileID, max(Created) as Created
from Photos
group by ProfileID
) X
on X.ProfileID = P.ProfileID
and X.Created = P.Created

It works, but it’s awkward and has potential for performance problems. From my limited experience with windowing functions and a short search on the web, I couldn’t find a windowing-function solution. But again, there’s no reason an RDBMS can’t make it easy for us, and again, the SQL language should make it easy for us!

Why can’t the SQL standards group innovate? Something like this:

select
ProfileName,
P.PhotoFileName,
P.PhotoBlob
from Photos P
join Profiles PR
on PR.ID= P.ProfileID
group by ProfileID
being max(Created) --reads: being a record which has the maximum created field value

And leave it to the RDBMS to decide how to make it work? In procedural coding over a set, while you are searching for the maximum value you can also keep track of the entity which holds that maximum. There’s no reason this can’t work.
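Again for comparison, the “record holding the maximum” intent is easy to state in LINQ (same assumed db context and entity names as above), which suggests a SQL spelling of it shouldn’t be out of reach:

using System.Linq;

// For each profile, keep the single photo record with the maximum Created.
var latestPhotos = db.Photos
    .GroupBy(p => p.ProfileID)
    .Select(g => g.OrderByDescending(p => p.Created).FirstOrDefault());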

It seems the limitation is the SQL standardisation body. I guess someone could always implement a workaround: create a plugin for open-source SQL query tools, as well as open-source functions to convert SQL+ [with such abilities as introduced above] to SQL.

(By the way, I have by no means completely thought out all of the above, but I hope it describes the spirit of my frustrations and of the possible solution. I hope some RDBMS experts can comment on this dilemma.)

Mining Space

It’s quite an aspirational idea – to even look at mining asteroids in space. It may feel unreachable, something that will always be put off to the future. But the creation of the new company Planetary Resources is real, with financial backers and a significant amount of money behind them. We’re currently in transition: government, and particularly the U.S. government, is minimising its operational capacity for space missions, while the commercial sector is being encouraged and is growing. For example, Sir Richard Branson’s Virgin Galactic, among other organisations, is working toward real, affordable (if you’re rich…) space tourism, and by extension the commoditisation of space access in general, bringing down prices and showing investors that space isn’t just for science anymore; you can make a profit.

I recently read a pessimistic article, one where the break-even price for space mining is in the hundreds of millions of dollars for a given mineral. One needs to be realistic; however, I think that author is being way too dismissive. You see, there are many concepts in the pipeline which could significantly reduce the cost of earth-space transit. My favourite is the space elevator, where you don’t need a rocket to climb kilometres above the earth (although you would likely still need some sort of propulsion to accelerate to orbital velocity).

But as well as being across the technology, a critic needs to be open to other ideas. For example, why bring the minerals back to Earth at all? Why not attempt to create an extra-terrestrial market for them? It may well cost much more to launch a large bulk of materials into orbit than to extract materials from an asteroid (in the future), with space factories building cities in space.

Of course, I still think space mining is hopeful at best; let’s balance despair with hopeful ideas.


Carbon Tax and EVs

Just a quick one…

There are many reasons Electric Vehicles (EVs) are becoming more popular: improvements in technology for greater range, and production by bigger companies lowering prices. But there is one major driving factor: running cost.

The high and, at times, rising price of fossil fuels makes consumers look elsewhere and think outside their usual comfort zone. Electricity is cheap. Because of this, the technology is being researched and major car companies are adjusting.

So what will happen when a Carbon Tax comes into Australia, one which doesn’t increase petrol prices, yet does increase electricity prices? Now, I don’t subscribe to the Global Warming scare; I’ve personally read a few papers and plenty of commentary, enough to understand that this is a hoax.

However, it seems a contradiction to create regulation which will adversely affect the market, making consumers less likely to choose a “greener” option. (In my opinion, EVs are not just cheaper to fuel, but also a lot cheaper to maintain – no engine oil, tuning, timing belt, radiator, etc.)


Are sciences becoming too philosophical?

Logic and reason are powerful things and great for debate; however, they are also dangerous in the absence of facts. Just because one can reason with logic about an issue doesn’t mean the conclusion is true.

These thoughts are of course provoked somewhat by recent scientific news and debate, particularly on A Universe from Nothing, but also Anthropogenic Global Warming.

In A Universe from Nothing (UFN), eminent scientists (physicists and cosmologists) put forward models and analogies of a Universe (and particularly the Big Bang) being completely viable to appear from nothing, without a spiritual force such as God.

Their theories do make sense; they are well reasoned, from M-Theory (String Theory) to the Multiverse. There are plenty of models which can describe their hypothesis.

However, in the absence of empirical data and observation, there is no way to verify such a hypothesis. Just because you have a complex theory which fits together nicely doesn’t mean you have found an objective truth.

In the article, the author writes

can almost put under a lab microscope.

Now, you can either observe or not observe. There is no “nearly observe”. In M-Theory they have all but ruled out the possibility of observing the theoretical vibrating strings at the centre of matter. Perhaps graviton particles can move between dimensions? Perhaps we can observe them? It’s all inconsequential if it cannot be proven, and it may as well be called a philosophical statement.

Sometimes complex philosophical arguments are seemingly easy to break with simpler logic and reason. Sometimes these so-called scientists can get carried away, perhaps believing their logic trumps common sense. Take this quote for example:

Indeed, you might ask why it is that we think there is something here at all

Every individual is self-aware, alive. Something (matter) is here. Why make assertions suggesting there’s nothing here at all? At best it is a poor analogy with which to frame their theories. And how is it productive? Of course one needs to devise hypotheses, but until there is proof, or a pathway to finding proof, why publish such hypotheses? You have to wonder whether these people can indeed be called scientists, or rather fathers of a new religion.

For the Sydney Morning Herald to publish this ridiculous stuff, it must be a slow news day.


What the… Payroll tax?

I didn’t really notice the GST debate, except being annoyed at all prices increasing when GST was introduced (I was in high school). It turns out that a major reason for its introduction was to eliminate many state taxes, one of these taxes being Payroll tax…

Have a look: http://www.sro.vic.gov.au/sro/SROnav.nsf/LinkView/8AFF7B9FB4EB3733CA2575D20022223D5DB4C6346AF77ABBCA2575D10080B1F7

It turns out that if I employ too many people, I will have to pay the state a 4.9% tax on all the gross wages paid to my employees – including superannuation! Not only is this a disincentive to employ, it’s also yet another administrative burden which limits growth. I hear it all the time: ultimate success requires flexibility and scalability – Payroll tax is an ugly and unnecessary burden.

Sure, we can’t just pull such revenue out from under the states, but it can be replaced with revenue from another, more efficient tax – such as GST. At just 10%, our GST is relatively low compared to other countries; in Europe some countries have a GST or VAT of 25%.

So why not simply increase GST? Consumers, aka voters, are the end-users and effectively the ones who pay the tax. Even though consumers could ultimately pay less in the long run, because companies would no longer need to pay Payroll tax, the whole economy changes. Smaller businesses that didn’t previously pay Payroll tax would effectively be charging their customers more, because they have no regained revenue from a dropped tax to discount from. Small changes to the rate over a long time, matched with reductions in Payroll tax in the states, may work best. But in summary, GST rate increases are political poison for non-business-owning voters.

Another issue is fraud. As GST increases, the returns on VAT fraud become greater. Countries such as Sweden (25%) and the UK (20%) are subjected to simple but hurtful frauds which effectively steal from GST revenue. It basically works by having a fake company be liable to pay GST and a legitimate company entitled to the refund; the fake company then goes bankrupt. As the GST rate increases, the payback from such frauds increases, encouraging more incidents. It seems that any macro-economic change, either short term (government stimulus) or long term (tax reform), opens the door for corruption and rorting. If the GST rate is to be increased, the right legislation needs to be in place to prevent such fraud.

So in the end, the ultimate way for a business to overcome Payroll tax is to innovate: build good products which provide a comfortable return, and improve internal efficiency, reducing the need to hire as many staff and maintaining a competitive edge.

DIDO – Communication history unfolding?

First of all I just want to say – THIS MAY BE HUGE!!

I read this article last night: http://www.gizmodo.com.au/2011/08/dido-tech-from-quicktime-creator-could-revolutionise-wireless-broadband/

In plain English: a company has discovered a way to dramatically improve mobile internet. It will be 5-10 years before it’s commercialised; however, I believe it will happen sooner, with many realising just how revolutionary it is, investing more money and attracting more resources to get it done sooner.

I am not a representative of the company, but I have been involved in understanding and pondering wireless technology, even coming up with faster and more efficient wireless communication concepts, though none as ground-breaking as this one. I don’t claim to know all the details for certain, but having read the whitepaper, I believe I can quite accurately assume many details and future considerations. Anyway, I feel it’s important for me to help everyone understand it.

How does it work (Analogy)?

Imagine walking down the street: everything is making noise – cars, people, the wind. It’s noisy, and someone in the distance is trying to whisper to you. Suddenly all the noise disappears, and all you can hear is that person, clearly. This is because someone has adjusted all the sounds around you to cancel out, leaving just that person’s voice.

How does it work (Plain English)?

When made available in 5-10 years:

  • Rural users will have speeds as fast as in CBDs, receiving signals from antennas as far as 400km away!
  • In cities there will need to be several antennas within ~60km of you
    • today so many are required for mobile internet; the number will be reduced…
    • the number of antennas per mobile phone tower or building will be reduced to just one.
  • there will be a central “server” performing the mathematical calculations necessary for the system.

The most technical part (let’s break it down):

  1. Unintended interference is bad (just to clarify and contrast)…
  2. DIDO uses interference, but in a purposeful way
  3. DIDO uses multiple antennas, so that at a particular place (say your house), the signals interfere with each other in a controlled way, leaving a single channel intended just for you.
  4. It’s similar to how this microphone can pick up a single voice in a noisy room – http://www.wired.com/gadgetlab/2010/10/super-microphone-picks-out-single-voice-in-a-crowded-stadium/
    but a little different…

How does it work (Technical)?

I have been interested in two related concepts recently:

  1. Isolating a single sound in a noisy environment – http://www.wired.com/gadgetlab/2010/10/super-microphone-picks-out-single-voice-in-a-crowded-stadium/
  2. I saw an interview with an ex-Australian spy who worked at a top-secret facility in Australia in co-operation with the US. He was releasing a book revealing what he could. From this facility he spied on radio communications around the world. I wondered how, and then figured they likely employed the “super microphone” method.

When I heard about this technology last night, I didn’t have time to look at the whitepaper, but assumed the receivers might use “super microphone” sort of technology. It turns out the inverse (not the opposite) is true.

Scenario:

User A’s radio is surrounded by radios from DIDO. The DIDO server calculates what signals need to be generated from the various radios such that, when converging on User A, they “interfere” as predicted, leaving only the required signal. When there are multiple users, the mathematical equations take care of working out how to converge the signals. As a result, the wireless signal in the “area of coherence” for the user is as if the user had the full spectrum 1:1 to an external wireless base station.
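Rearden’s whitepaper doesn’t publish the precise mathematics, so here is a hedged toy sketch of the general principle (zero-forcing style precoding with a known channel matrix) for two users and two antennas; all the numbers are made up:

using System;

class DidoToy
{
    static void Main()
    {
        // H[i,j] = channel gain from antenna j to user i (assumed known at the server).
        double[,] H = { { 1.0, 0.4 }, { 0.3, 1.0 } };
        double[] s = { 1.0, -1.0 }; // symbol intended for each user

        // Solve H * x = s (2x2 Cramer's rule), so the transmitted signals
        // converge at each user as exactly their own symbol.
        double det = H[0, 0] * H[1, 1] - H[0, 1] * H[1, 0];
        double[] x =
        {
            (s[0] * H[1, 1] - s[1] * H[0, 1]) / det,
            (s[1] * H[0, 0] - s[0] * H[1, 0]) / det,
        };

        // What each user's radio actually receives after the signals combine.
        for (int i = 0; i < 2; i++)
        {
            double received = H[i, 0] * x[0] + H[i, 1] * x[1];
            Console.WriteLine($"User {i} receives {received:F3}"); // ≈ s[i]
        }
    }
}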

Implications for domestic backhaul

There would need to be fibre links to each of the antennas deployed, but beyond that, remaining backhaul and dark fibre will rapidly become obsolete. DIDO can reach 400km in rural mode, bouncing off the ionosphere and still maintaining better latency than LTE, at 2-3ms.

Physical Security?

We hear about quantum communication and the impossibility of deciphering its messages. I believe a similar concept of physical security can be achieved with DIDO. Effectively, DIDO provisions areas of coherence: areas in 3D space where the signals converge, cancelling out signal information intended for other people. So you only physically receive a single signal on the common spectrum; you can’t physically see anyone else’s data unless you are physically in the target area of coherence. This does not, however, mean such a feature guarantees privacy. By deploying a custom system of additional receivers sitting outside the perimeter of your own area of coherence, you could sample the raw signals before they converge. Using complex mathematics, and armed with the exact locations of the DIDO system antennas, one would theoretically be able to single out the individual raw signals from each antenna and their times of origin, and then calculate the converged signal at other areas of coherence. This is by no means a unique security threat; of course, one could simply employ encryption over the channel for secrecy.

This doesn’t break Shannon’s law?

As stated in their white paper, people incorrectly apply the law to spectrum rather than to a channel. Even before DIDO, one could use directional wireless from a central base station and achieve 1:1 channel contention (though that’s difficult to achieve practically). DIDO creates “areas of coherence” where all a receiving antenna picks up is a signal intended only for it.
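For reference, the Shannon–Hartley limit C = B log2(1 + S/N) bounds the capacity of a single channel of bandwidth B; it says nothing about how many spatially separated channels can reuse the same band. If DIDO really does give each of N users an independent area of coherence over the same spectrum, aggregate capacity can approach N times the single-channel figure without violating the law.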

Better than Australia’s NBN plan

I’ve already seen some people attempt to discredit this idea, and I believe they are both ignorant and too proud to give up their beloved NBN. I have maintained the whole time that wireless technology would exceed the NBN believers’ interpretation of Shannon’s law. Remember, Shannon’s law is about *channel*, not *spectrum*. DIDO is truly the superior option: gigabit speeds with no digging! And it is a clear warning that governments should never be trusted with making technology decisions. Because DIDO doesn’t have to deal with channel access, the circuitry for the radios is immensely simplified. The bottleneck will likely be the ADCs and DACs, of which NEC has 12-bit, 3.2-gigasample devices (http://www.physorg.com/news193941421.html). So multi-terabit and beyond is no major problem as we wait for the electronic components to catch up to the potential of wireless!

CRITICISMS UPDATE:

  • One aspect to beware of is the potential need for a 1:1 correlation between base-station antennas and users. I can’t find any literature yet which either confirms or denies such a fixed correlation, but the tests for DIDO used 10 users and 10 antennas.
  • If there must be one antenna per user, this idea isn’t as earth-shattering as I would hope. However, it would still be relevant: 1) it still achieves 100% spectrum reuse, 2) it avoids the pitfalls of centralised directional systems with beam-forming, where obstacles are an issue, and 3) it can leverage the ionosphere for rural applications – very enabling.
  • After reading the patent (2007), I see no mention of the relationship between AP antennas and the number of users. However, I did see that there is a practical limit of ~1000 antennas per AP. It should be noted that if this system does require one antenna per user, it would still be very useful as a boost system. That is, everyone has an LTE 4G link, and then when downloading a video, the bulkier data gets streamed very quickly via DIDO (the number of concurrent DIDO connections being limited by the number of AP antennas).
  • The basis for “interference nulling” was discussed in 2003 by Agustin et al.: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.10.2535
  • Removed many  ! at the top, to symbolise the potential for disappointment.
  • Hey there’s a spell check!
  • Have a look here for the Whirlpool discussion: http://forums.whirlpool.net.au/forum-replies.cfm?t=1747566

Memorable IPv6 Addresses

Back in Nov 2009, I foresaw that IPv6 addresses would become a menace to memorise, so I had a crack at improving the memorability of such addresses; see http://blog.jgc.org/2011/07/pronounceable-ipv6-addresses-wpa2-psk.html?m=1. The basic idea is that sounds which make up words, or resemble words, are much easier to remember than individual digits. I was actually thinking about this idea last night, and how it could be applied to remembering strong passwords.

This morning I got an email from a colleague who pointed out this: http://www.halfbakery.com/idea/IPv6_20Worded_20Addresses#1260513928. I don’t believe the scheme used there is as memorable as mine, but it sounds like other people are having similar ideas.

Back to my thoughts last night on more memorable passwords. We know we’re supposed to use upper and lower case, special symbols, etc. But even then, you’re not using the full 64-bit capacity of the recommended 8-character string. To use my scheme to memorise more secure passwords, you would simply use my tool.
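To show the flavour of the scheme (this is a simplified sketch, not my tool’s exact mapping): pack bits into consonant-vowel syllables, so each 16-bit group of an IPv6 address becomes one pronounceable five-letter word.

using System;

class SyllableDemo
{
    // 16 consonants encode 4 bits each; 4 vowels encode 2 bits each.
    const string Cons = "bdfghjklmnprstvz";
    const string Vows = "aiou";

    // Encode one 16-bit IPv6 group as a CVCVC "word" (4+2+4+2+4 = 16 bits).
    static string ToWord(ushort group) => new string(new[]
    {
        Cons[(group >> 12) & 0xF],
        Vows[(group >> 10) & 0x3],
        Cons[(group >> 6) & 0xF],
        Vows[(group >> 4) & 0x3],
        Cons[group & 0xF],
    });

    static void Main()
    {
        // First half of an example address: 2001:0db8:0000:0001
        ushort[] address = { 0x2001, 0x0db8, 0x0000, 0x0001 };
        Console.WriteLine(string.Join("-", Array.ConvertAll(address, ToWord)));
    }
}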

I made a video 🙂

[youtube=http://www.youtube.com/watch?v=f60GGxPskG4]