The Phone is dead; long live Web Phone

Dial 1 for clock radio sound quality;
Dial 2 for abrupt disconnection.

Please put your life on hold while we find the next available inconvenience;
Your call is important to us, and whatever you do:
Don’t. Hang. Up.

Telephony is a terrible experience that disappoints us every day. Long phone numbers, dropped calls, call queues, and monotonous hold music are only some of the problems we’re all familiar with.

We’re using a prehistoric telecommunications system, handcuffed to the past. The obligation of keeping a person on an active, continuous “call” was cemented with the creation of the public telephone system in Berlin, Germany in 1877. That system ran on analogue lines, with people employed as “operators” to physically plug in wires and connect you to another person. Improvements have been made over the years, and continue to this day: first a transition to automatic connection with pulse dialing, then tones; at first specialised electronic systems, and now computers with data networks over fibre optics. VoIP is the most recent notable extension, but what we ended up with was merely an internet line with the same terrible experience. Continuous “calls” started as a necessity of the wires, and that necessity is what holds telephony back to this day.

That’s all about to change.

The Web is coming to the rescue. With continuous improvement to the web and web browsers, a fresh approach is now possible, one that leaves the old phone system behind. This isn’t VoIP; this is talking to people through your web browser, for free, with a user interface. A web feature called WebRTC, standardised in May 2017, is the key to making this possible. It allows your browser to communicate directly with another browser, with no phone company in between and no software to install.

With the web, the experience of telephony can be reimagined and redefined. Picture this –

You visit a business’s website on your mobile phone and go to the contact page looking for the phone number. Instead of a phone number, you find a Web-Phone button, and you tap it. Up comes an animation showing the call being connected, then you see the message “Connected”. You put the phone to your ear and start speaking to the person on the other side in high-definition audio quality. If there’s a break in internet connectivity, a new connection can be made, but it’s still known to be you; it’s still the same “call”.

This is just the tip of the iceberg – there is much more that Web-Phones will be able to do, and many problems that it can solve for different industries. Make sure you follow me and HordernIT so you can learn about new capabilities and how they can help you.

The Web Phone has been possible for many years, but the technology pieces still need to be polished and packaged, and more importantly, the telephony culture needs to change. Every website needs a Web-Phone button, and the old phone number will remain alongside it for quite some time during the transition.

Individual businesses and economies of the world can benefit from greater productivity and better customer service from clearer communication and better reliability.

Todd is a futurist and tech evangelist with HordernIT. Enquire using our contact form if you would like market-leading capabilities within your business.

Packed URL

Shortening services like TinyURL exist because short strings are hard to “compress”. I recently needed to compress a URL. I couldn’t use a URL shortener, because each URL is only used once, and the key changes each time.

So I built my own algorithm which others will be able to use. This could also be used for QR Code data, and for Web Page Proxies which usually contain the URL as a parameter.

Let’s work backwards from the results:

We use the following input string, which is 62 characters:

With no packing, just converting to base64 encoding, it’s 112 characters, like this:


With GZip, a header needs to be included, so on short strings you actually go backwards, encoding to 124 base64 characters:


So what I achieved is significant and specialised for URLs: only 64 base64 characters, 57% of the size of the original base64 string:


The fact that my scheme is “specialised” is important; that’s why it beats general GZip compression. Mine probably wouldn’t beat GZip on larger code files, but that’s not what it’s for.

How it works

It works by digesting a URL in a predictable way. URLs are predictable; they have a structure, like a specific English sentence. Here’s a breakdown of the example URL:

[https://] [] [stackoverflow] [.] [com] [] [/] [questions] [/] [1192732] [/] [really-simple-short-string-compression] []

  • [https://] – The URL scheme is either HTTP or HTTPS
    • 1 bit instead of 56 bits
  • [] – There’s no @, so no username or password – 1 bit = 0
  • [stackoverflow] – all lower-case characters, so a 5-bit encoding is used (65 bits), plus a header (3 bits) to indicate it’s a 5-bit string, plus a 3-bit-block number for the length (6 bits)
    • 74 bits instead of 104 ~ Running total: 76
  • [.] – In the host section, we expect multiple dots. So after every string section we encode whether there is a dot with a single bit – 1 bit down from 8
  • [com] is in our “host” dictionary which has 16 values so 4-bit, plus the header (3-bit) to indicate it was a dictionary lookup
    • 7-bits instead of 24 bits ~ Running total: 84
  • There are no more dots – 1 bit set to 0
  • [] There’s no :, so no port number override
  • [/] Now we expect folder forward slashes –
    • 1 bit set to 1 instead of 8 bits ~ Running total: 87
  • [questions] is encoded as a 5-bit string –
    • 45 bits + 3 + 6 – 54 bits instead of 72 bits ~ Running total: 141
  • [/] – 1 bit set to 1 instead of 8 bits
  • [1192732] is a number, encoded as an 8-bit-block number needing 3 blocks + a 3-bit header – 27 bits instead of 56 bits
  • [/] – 1 bit set to 1 instead of 8 bits
    • ~ Running total: 170
  • [really-simple-short-string-compression] – 190bits + 3 + 9 – 202 bits instead of 304 bits ~ Running total: 372
  • There’s no dot – 1 bit = 0
  • [] There’s no question mark for params – 1 bit = 0
  • total: 374
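The running totals above can be tallied mechanically. Here’s a sketch of that arithmetic in JavaScript; the helper names are mine, and the 3-bit-block length encoding (1 continuation bit + 2 data bits per block) is inferred from the 6-bit and 9-bit length figures in the breakdown:

```javascript
// Length stored in 3-bit blocks: 1 continuation bit + 2 data bits per block.
function blockLenBits(n) {
  let blocks = 1;
  while (n >= 4) { n >>= 2; blocks++; } // 2 data bits per block
  return blocks * 3;
}

// 5-bit string cost: 3-bit type header + length + 5 bits per character
const str5 = (s) => 3 + blockLenBits(s.length) + s.length * 5;

let total = 0;
total += 1;                  // scheme: https
total += 1;                  // no username/password
total += str5("stackoverflow");     // 74 -> running total 76
total += 1;                  // dot present
total += 3 + 4;              // "com" via 16-entry host dictionary -> 84
total += 1;                  // no more dots
total += 1;                  // no port override
total += 1;                  // first folder slash -> 87
total += str5("questions");         // 54 -> 141
total += 1;                  // slash
total += 3 + 3 * 8;          // 1192732 as three 8-bit blocks + header
total += 1;                  // slash -> 170
total += str5("really-simple-short-string-compression"); // 202 -> 372
total += 1;                  // no dot
total += 1;                  // no params
console.log(total);          // 374 bits = 47 bytes = 64 base64 characters
```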

It’s possible to add your own custom values to the segment dictionaries. This would be handy if you create a custom implementation for your application: you can have common API folder paths across multiple hosts, common subdomain names, common parameter keys, and support for specific TLDs.


Data Types

  • Expected http scheme – 1 bit
  • Expected dot – 1 bit
  • Expected folder – 1 bit
  • 4-bit string – most popular lowercase English language characters
  • 5-bit string – lower case English alphabet characters, and also – _ + which are common in URLs
  • 6-bit string – base64 using – _ and + for padding
  • GUID – 128-bit
  • Number – 8-bit block (with 1-bit extension)
  • Expected port number – 16-bit ushort
  • (Reserved) – Hex? Base-7 for special chars? Mixed (Dictionary Text)?
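To illustrate the 5-bit string type above: it covers the 26 lower-case letters plus the URL-common characters – _ +, which is 29 of the 32 available codes (hence the 3 unassigned codes mentioned later). The exact code assignments below are my assumption; the real bit-packing is omitted:

```javascript
// Assumed 5-bit alphabet: a-z then '-', '_', '+' (29 codes; 3 spare).
const ALPHA5 = "abcdefghijklmnopqrstuvwxyz-_+";

function encode5(s) {
  // One 5-bit code (0-28) per character
  return [...s].map((c) => ALPHA5.indexOf(c));
}

function decode5(codes) {
  return codes.map((i) => ALPHA5[i]).join("");
}
```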


It would be nice to reduce further, to around 25% of the size 80% of the time, but I don’t think that’s possible with the current scheme, and it may be theoretically impossible. 50% is likely close to the maximum compression limit.

  • Connect with other people and groups to help contribute (thanks Paul Brooks)
  • IP addresses as hosts (thanks Paul Brooks!) – IPv4 and IPv6
  • Punycode Support (thanks Paul Brooks!) – for unicode support
  • Fill 3 remaining unassigned characters for the 5-bit text scheme
  • 7-bit base-128 to support special characters that are allowed in URL, ie. { , ! } and more
    • Or, perhaps remove 4-bit text strings replacing that with special characters only, AND then ADD a mixed-type segment type which can include Dictionary, String, Data
  • Distinct URL Text to URL Data Model step – so we can leverage a mature URL library to ensure we don’t mis-parse a URL. Then the URL Data Model can be binary (bitstream) serialised.
  • Release the code opensource for others to learn and expand on.
  • PackingHashes (bookmarks) – technically not needed, because servers don’t get sent these values nor process them.
  • Decoder – this is only academic so far with a prototype encoder.
  • My system already includes a 4-bit (16 letters) string encoder for when there’s only lowercase characters and the less common alphabet characters are not used.
  • I intend to ignore uppercase characters when selecting the 5-bit encoding except for parameter values. But some web servers and underlying file systems are case-sensitive so this would be an optional configuration of the encoder.
  • Mixed Dictionary and Letters – allow a match on the dictionary as well as text encoding inside a URL segment. This mode would be run alongside the normal text-only mode and the shorter solution used.
  • Include an English dictionary for common partial word segments like ‘ies’. This might only be useful for longer word segments given the overhead of Mixed Dictionary and Letters.
  • Hardcoded and common dictionaries, with a header number to indicate which dictionary set to use. For example, they could be regional – Australia has its own common host endings. Also, popular hostnames such as facebook, google, bing, instagram, etc.
  • ParamsOnly mode – where the prefix is inferred by the application context, and the params are the variable part of the URL string.
  • Javascript Version – currently it’s written only in C#.

Don’t use Resource Strings with C#

I recommend separate static class files with static read-only strings.


  • The XML Resource files are hard to source-control
  • The UI for resource strings is hard to scroll through and edit in-place

It’s better when strings are in code-files, with multi-line strings using @”” or $@””. Maybe append “Resources” to the end of the class name, as a convention.


  • You can use any coding techniques with them
  • You can be more cohesive, create multiple separate classes
  • You can have static functions to take and apply parameters
  • You use the normal text editor
  • You can press F12 on a reference, and get directly to editing the string
  • No XML to deal with – fewer merge conflicts
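A minimal example of the convention described above (the class and string names here are illustrative, not prescribed):

```csharp
using System;

// Convention: plain static class, name suffixed with "Resources"
public static class EmailResources
{
    public static readonly string WelcomeSubject = "Welcome aboard";

    // Multi-line strings are easy with verbatim literals:
    public static readonly string WelcomeBody = @"Hello,

Thanks for signing up.";

    // Static functions can take and apply parameters:
    public static string Greeting(string name) => $"Hello, {name}!";
}
```

Press F12 on any reference and you land directly on the string, ready to edit.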


Why did Open Source Bounties Fail?

I’m shocked. I thought Bounties would supercharge Open Source development. You were the chosen one! (cringe)

So today, I wanted to post a bounty for Stasher. I did so on BountySource, but then I realised it was broken and abandoned. I looked further afield and it’s the same story, a digital landscape littered with failures.

Bounty Source

Is one of the better ones, limping along. They need a serious financial backer to grow their community faster.

  1. They seem to have a lot of server issues. Have a look at their recent Twitter feed.
  2. When I posted my bounty, I did expect a tweet to go out from their account (as per my $20 add-on). Nothing. Either that subsystem is broken, or it has never been automated.
  3. Bounty Search is broken – “Internal server error.” in the console log.
  4. We know what I think about good security architecture. If people can’t talk about security correctly, it doesn’t matter that they know about bcrypt – can they properly wield its power?
  5. No updates on Press since 2014

Freedom Sponsors

They don’t have enough of a profile to excite me about their future. This has apparently been executed on a shoestring budget. (I’ll try posting a bounty here if the Bounty Source one lapses.)

  1. Only 12 bounties posted this year (Jan-Nov) – only 4 of those have workers, 2 of those look inactive. But at least search works.
  2. Their last Tweet was 2012

Others are down.


This shouldn’t have happened. It failed because these startups ran out of cash and motivation.

There is massive potential here. So far we’ve seen the MySpace version; we need Facebook-level execution. And whoever does this needs a good financial backer with connections to help grow the community.

I hope to see an open source foundation, maybe Linux Foundation, buy Bounty Source.

Stasher – File Sharing with Customer Service

(This is quite a technical software article, written with software coders in mind)

It’s time for a new file sharing protocol. P2P in general is no longer relevant as a concept, and central filesharing sites show that consumers are happy with centralised systems with a web interface. I think I have a good idea for the next incremental step, but first some history.

It’s interesting that P2P has died down so much. There was Napster and other successes which followed, but BitTorrent seems to have ruled them all. File discovery was lost, and with Universal Plug and Play being a big security concern, even re-uploading is off by default.

P2P is no longer needed. It was so valuable before because it distributed the upload bandwidth, and also provided some anonymity. But bandwidth continues to fall in price. MegaUpload and others like it were actually the next generation: they added some customer service around the management of the files, and charged for premium service. Dropbox and others have since carved out even more again.

Stash (which is hopefully not trademarked) is my concept to bring back discovery. It’s a different world now, where many use VPNs and even Tor, so we don’t need to worry as much about security and anonymity.

It’s so simple, it’s easy to trust. With only a few hundred lines of code in a single file, anyone can compile their own copy on Windows in seconds, so there can be no hidden backdoors. Users who can’t be bothered with that can download the application from a trusted source.

It works by being ridiculously simple. A dumb application runs on your computer, set up to point at one or more servers. It only operates on one folder: the one it resides in. From there, the servers control Stasher. A client can do any of the following, and can ban a server from doing a particular action.

And that’s it. It’s so basic, you should never have to update the client. New features should be resisted. Thumbnails should be generated on the server – because there is time and bandwidth to simply get the whole file.

All of this works with varying software on the server, but the same Stash client. There is no direct P2P; however, several servers can coordinate, such that a controller server can ask a client to upload to another specific server. Such a service can pre-package the Stash client with specific servers, and throughout its lifetime the client’s server list can be updated with new servers.
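To sketch what server-driven control of a dumb client might look like: the command set and server names below are illustrative (the article leaves the exact action list open), but the shape – servers issue commands, the client dispatches them and can ban actions per server – is the point:

```javascript
// Per-server bans the user has configured (hypothetical server names).
const banned = { "server-b.example": new Set(["delete"]) };

// The dumb client: dispatch a server-issued command, honouring bans.
function dispatch(server, command) {
  const bans = banned[server] || new Set();
  if (bans.has(command.action)) return "banned";
  switch (command.action) {
    case "download": return `fetch ${command.file} from ${server}`;
    case "upload":   return `send ${command.file} to ${command.target}`;
    case "delete":   return `remove ${command.file}`;
    default:         return "unknown action";
  }
}
```

A real client would loop, polling each configured server for its next command and acting only within its own folder.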

I’m thinking of building this, but I’m in no rush. I’ll make it open source. Can you think of any other applications for such a general-purpose file sharing framework?



Security measures ideas:

  • [Future] Code Virtual Machine
    • Only System and VM namespaces used
    • VM namespace is a separate small DLL which interacts with the system { Files, Network, System Info }
    • It’s easier to verify that the VM component is safe in manual review.
    • It’s easy to automatically ensure the application is safe
    • Only relevant for feature-extended client, which will span multiple files and more
  • [Future] Security analyser works by decompiling the software – ideally a separate project

Remaining problems/opportunities:

  • Credit – who created the original photo showing on my desktop? They should get some sort of community credit, growing with the votes they receive. This needs some sort of separate/isolated server which takes a hash and signs/stores it with a datetime, potentially also with extra meta-data such as author-name/alias.
    • Reviewers, while not as important, should also be able to have their work registered somewhere. If they review 1000 desktop backgrounds, that’s time. Flickr, for example, could make a backup of such credit. Their version of the ledger could be signed and dated by a similar process.
  • Executable files and malware – 
    • AntiVirus software on the client
    • Trusting that the server makes such checks – eg. looking inside non-executables even for payloads. ie. image file tails.
  • Hacked controller
    • File filters on the client to only allow certain file types (to exclude executable files) – { File extensions, Header Bytes }
    • HoneyPot Clients – which monitor activity, to detect changes in behaviour of particular controllers
    • Human operator of controller types in a password periodically to assure that it’s still under their control. Message = UTCTimestamp + PrivateKeyEncrypt(UTCTimestamp), which is stored in memory.

Try Scope Catch Callback [TSCC] for ES6

So it has started; it wasn’t a hollow thought bubble. I have started the adventure beyond the C# nest. It will take a while, because I still have a lot of software that runs on C#, and I do still like the language, but all new development will be on ES6 and NodeJS.

So I’m going to record my outlook over a few blog posts. I re-discovered Cloud9 IDE, and I’ve got a few thoughts on architecture and a new feature for ES6.

Today, I’ll tell the world about my proposed ES6 enhancement.

Despite the ECMAScript committee stating they are about “Standards at Internet Speed”, there isn’t much Internet tooling to make it happen. They have certainly been successful in making rapid progress, but where does one submit an idea to the committee? There’s not even an email link. I’m certainly not going to cough up around $100k AUD to become a full member. [Update: They use GitHub – a link to it on their main website would be great.]

So I’ll be satisfied to just put my first ES6 idea here.

Try blocks don’t work in a callback world. I’m sure there are libraries which could make this nicer. In C#, try blocks do work with the async language features, for instance.

So here is some code which won’t catch an error:

    try {
        $http.get(url).then((r) => {
            //process the result
        });
    } catch (e) {
        //handle the error
    }

In this example, if there is an error during the HTTP request, it will go uncaught.

That was simple, though. How about a more complex situation?

    function commonError(e) {
        //handle the error
    }

    try {
        runSQL(qry1, (result) => {
            runSQL(qry2, (result) => {
                //process the results
            }, commonError);
        }, commonError);
    } catch (e) {
        //also never reached for errors in the callbacks
    }

Callback nesting isn’t very nice. This is why `await` is pushed forward as a good candidate. But what if the API you target doesn’t implement Promise? What if you only sometimes define a try block?

My proposal is to supply a method which gets the Try Scope Catch Callback [TSCC]. If you don’t return a promise, it would be like this:

    function get(url, then, error) {
        error = error || window.callback.getTryScopeCatchCallback(); //TSCC

        //when an error occurs, call error(e):

        //This could be reacting to another
        //try/catch block, or be the result
        //of a callback from another error method
    }

Promises already have a catch function in ES6. They’re so close! A Promise should direct its error/catch callback to the TSCC by default. If the Promise spec were updated to include this, my first example above would have caught the error with no changes to the code.
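The behaviour being proposed can be roughly emulated in today’s JavaScript with a small helper. Everything here is hypothetical (tryScope, getTryScopeCatchCallback); it sketches what the engine would provide natively – a callback captures the enclosing try scope’s catch handler at registration time:

```javascript
// The currently active try scope's catch handler, if any.
let currentCatch = null;

function getTryScopeCatchCallback() {
  // Default: rethrow if no enclosing try scope registered a handler.
  return currentCatch || ((e) => { throw e; });
}

// Emulates a try block whose catch is reachable from callbacks.
function tryScope(body, onError) {
  const prev = currentCatch;
  currentCatch = onError;
  try { body(); } finally { currentCatch = prev; }
}

// Usage: a callback API that doesn't return a Promise.
let seen = "";
function get(url, then, error) {
  error = error || getTryScopeCatchCallback(); // capture the enclosing catch
  error(new Error("HTTP failed"));             // simulate a failed request
}
tryScope(() => get("http://example.com"), (e) => { seen = e.message; });
```

Because the handler is captured when `get` runs, it still routes correctly even if the failure is reported later, after the try scope itself has exited.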

So what do you think ECMA members, can we get this into ECMAScript?

Feedback log – from the es-discuss mailing list

  • kdex

Why not just transform callback-based APIs into `Promise`s and use (presumably ES2017)
`await`/`async` (which *does* support `try`/`catch`)?

e.g.:

    try {
        await curl("");
        /* success */
    } catch (e) {
        /* error */
    }
  • My response

1. Whether you await or not, the try scope’s catch callback [TSCC] should still be captured.

2. If there is no use of Promise (for coders own design reasons) the try scope’s catch callback [TSCC] should be available

Why I want to leave C#

Startup performance is atrocious and, critically, it slows down development. It’s slow to get the first page of a web application, slow navigating to whole new sections, and worst of all: initial Entity Framework LINQ queries.

It’s 2016, .Net is very mature, but this problem persists. I love the C# language far more than Java, but when it comes to the crunch, run-time performance is critical. Yes, I was speaking of startup performance, but you encounter the same cost as new areas of the software warm up, and again when the AppPool is recycled (scheduled every 29 hours by default). Customers see it most, but it’s developers who must test and retest.

It wastes customers’ and developers’ time. Time means money, but the hidden loss is focus. You finally get focused on a task, then have to wait 30 seconds for an ASP.NET web page to load so you can test something. Even stopping debugging in VS can take tens of seconds!

There are known ways to minimise such warm-up problems, with native image generation and EF query caching, but neither is a complete solution. And why work around a problem not experienced in node.js or even PHP?

.Net and C# are primarily for business applications. So how important is it to optimise a loop over millions of records (for big data and science), versus the user and developer experience of running and starting with no delay?

Although I have been critical of Javascript as a language, recent optimisations are admirable. It has been optimised with priority on first-use speed, with critical sections optimised further as needed.

So unless Microsoft fixes this problem once and for all, without requiring developers to coerce workarounds, they’re going to find long-term dedicated coders such as myself shifting to Javascript, especially now that ECMAScript and TypeScript make it infinitely more palatable.

I have already recently jettisoned EF in favour of a proprietary solution which I plan to open source. I also have plans for node.js and even my own IDE which I plan to lease. I’m even thinking of leaving the Managed world altogether – Heresy!

.Net has lots going for it, it’s mature and stable, but that’s not enough anymore. Can it be saved? I’m not sure.

Let’s leave Javascript behind

Disclaimer: I am sure Javascript will continue to be supported, and continue even to progress in features and support, regardless of any Managed Web. Some people simply love it, with all the flaws and pitfalls, like a sweet elderly married couple holding onto life for each other.

It’s great what the web industry is doing with ECMAScript; from version 6 we will finally see something resembling classes and modules. But isn’t that something the software industry has had for years? Why do we continue to handicap the web with an inferior language when there have always been better options? Must we wait another 2–3 years before we get operator overloading in ECMAScript 7?

The .Net framework is a rich standardised framework with an Intermediate Language (IL). The compiler optimisations, toolset and importantly the security model, make it a vibrant and optimised ecosystem which could be leveraged. It could have been leveraged years ago with a bare minimum Mono CLR.

Google Chrome supports native code, however it runs in a separate process and calls to the DOM must be marshalled through inter-process communication methods. This is not ideal. If the native code support was in the same process it would be a good foundation for Mono.

I believe it is possible, perhaps even trivial, to achieve this nirvana of a Managed Web. We just need to take small considered steps to get there, so here’s my plan.

  1. Simple native code in the same process – Javascript is currently executed on the main thread, presumably through the window message pump executing delegates. These delegates can simply forward to managed function delegates. But first we should be able to trigger an alert window through native code which is compiled inside the Google Chrome code base.
  2. Simple mono support – Fire up Mono, provide enough support in a Base Class Library (BCL) for triggering an alert. This time there will be an IL DLL with a class which implements an Interface for start-up.
  3. Fuller API – With the simple milestones above completed, a complete BCL API can be designed and implemented.
  4. Optimisations – For example, enumerating the DOM may be slowed by crossing the Managed/Unmanaged boundary? jQuery-like functions could be implemented in native code and exposed through the BCL.

Along the way, other stacks and browsers could also leverage our work, establishing support for at least Java as well.

Example API:


  • void Start(IWindow window) – Called when the applet is first loaded, just like when Javascript is first loaded (For javascript there isn’t an event, it simply starts executing the script from the first line)



Warm up – Possible disadvantage

Javascript can be interpreted straight away, with several levels of optimisation applied only where needed, favouring fast execution. IL would need to be JIT’d, which is a relatively slow process, but there’s no reason why it couldn’t be AOT-compiled by the web server. Still, I see this as the biggest disadvantage, and it needs to be front of mind.

Other people around the web who want this


Inverse Templates

Hackathon project – Coming soon….

[Start Brief]
Writing open source software is fun, but to get recognition and feedback you need to finish and promote it. Todd, founder of Alivate, has completed most of the initial parts of a new open source project “Inverse Templates”, including most of the content below, and will work with this week’s hackathon group to publish it as an isolated open source project, and NuGet package.

Skills to learn: Code Templating, Code Repositories, NuGet Packages, Lambda, Text Parsing.
Who: Anyone from High School and up is encouraged to come.

We will also be able to discuss future hackathon topics and schedule. Don’t forget to invite all of your hacker friends!

Yes, there will be Coke and Pizza, donated by Alivate.
[End Brief]

The Problem

Many template engines re-invent the wheel, supporting looping logic, sub-templates, and many other features. Any control code is awkward, and extensive use makes template files confusing to first-time users.

So why have yet another template engine, when instead you can simply leverage the coding language of your choice, along with your hard-fought skills and experience?

The Solution

Normal template engines treat output content (HTML, for example) as the first-class citizen, with variables and control code second class. Inverse Template systems reverse this. By using the block-commenting feature of C-like languages, they let you leverage the full power of your programming language.

At the moment we only have a library for C# Inverse Templates. (Search for the NuGet Package, or Download and reference the latest stable DLL)

Need a loop? Then use for, foreach, while, and more.
Sub-templating? Call a function, whether it’s in the same code-file, in another object, static, or something more exotic.

Introductory Examples

Example T4:

<# foreach (var Person in People) { #>
Hello <#= Person.Name #>, great to iterate you!
<# } #>

Example Inverse Template:

foreach (var Person in People) {
    /*Hello */w(Person.Name);/*, great to iterate you!*/
}

As you can see, we have a function named w, which simply writes to the output file. More functions are defined for tabbing and line-endings, and being an Inverse Template, you can inherit the InverseTemplate object and extend it as you need! These functions are named with a single character, so they aren’t too imposing.

As with T4 pre-processing, Inverse Template files are also pre-processed, converting comment blocks into code, then saved as a new file which can be compiled and debugged. Pre-processing, as opposed to interpreting templates, is required because we rely on the compiler to compile the control code. Furthermore, pre-processed (and compiled) templates have performance benefits over interpreted ones.

Example pre-processed Inverse Template:

foreach (var Person in People) {
    t("Hello ");w(Person.Name);n(", great to iterate you!");
}

Function l, will output any tabbing, then content, then line-ending
Function n, will output the content followed by line-ending
Function t, will prefix any tabbing followed by content

The pre-processor will find and process all files in a given folder hierarchy ending with “.ct.cs”. The pre-processor is an external console application, so that it will even work with Express editions of Visual Studio.
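The core of that pre-processing step is simple to sketch: find each comment block and turn it into a write call, leaving the surrounding control code untouched. This is a hypothetical, simplified version in JavaScript (the real pre-processor is a C# console application and also handles the t/n/l tabbing and line-ending variants):

```javascript
// Replace each /*...*/ block with a w("...") call on its content.
function preprocess(src) {
  return src.replace(/\/\*([\s\S]*?)\*\//g, (match, text) =>
    `w(${JSON.stringify(text)});` // JSON.stringify escapes quotes and newlines
  );
}
```

For example, `/*Hello */w(Person.Name);` becomes `w("Hello ");w(Person.Name);`, which is plain compilable code.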

You should:

  • Put all of your Definitions into folder .\InverseTemplates\Definitions\, sub-folders are ok
  • Actively exclude and then re-include the generated .\InverseTemplates\Processed\, folder after pre-processing
  • Exclude the Definitions folder before you compile/run your project

Not the answer to all your problems

I’m not claiming that Inverse Templates are the ultimate solution; they’re simply not. If you have content-heavy templates with no control code and minimal variable merging, then perhaps you want to just use T4.

Also, you may find that you’re more comfortable using all of the InverseTemplate functions directly {l,n,w,t}, instead of using comment blocks. In some cases this can look more visually appealing, and then you can bypass the pre-processing step. This could be particularly true of templates where you have lots of control code and minimal content.

But then again, keep in mind that your code-editor will be able to display a different colour for comment blocks. And perhaps in the future your code-editor may support InverseTemplates using a different syntax highlighter inside your comment blocks.

For a lot of work I do, I’ll be using Inverse Templates. I will have the full power of the C# programming language, and won’t need to learn the syntax of another template engine.

I’m even thinking of using it as a dynamic rendering engine for web, but that’s more of a curiosity than anything.

Advanced Example – Difference between Function, Generate and FactoryGet

class TemplateA : InverseTemplate {
    public override void Generate() {
        /*This will be output first, no line-break here.*/
        FunctionC(); //A simple function call. I suggest using these most often, mainly to simplify your cohesive template, when function re-use is unlikely.
        Generate<TemplateB>(); //This is useful when there is some function re-use, or perhaps you want to contain your generation into specific files in a particular structure
        IMySpecial s = FactoryGet<IMySpecial>("TemplateD"); //This is useful for more advanced situations which require a search by interface implementation, with optional selection of a specific implementation by class name
        s.SpecificFunction("parameter");
    }
    private void FunctionC() {
        /*
After a line-break, this is now the second line, with a line-break.
*/
    }
}
class TemplateB : InverseTemplate {
    public override void Generate() {
        /*This will be the third line.*/
    }
}
interface IMySpecial {
    void SpecificFunction(string SpecificParameter);
}
class TemplateD : InverseTemplate, IMySpecial {
    public void SpecificFunction(string SpecificParameter) {
        /* This will follow on from the */w(SpecificParameter);/* line.*/
    }
}
class TemplateF : InverseTemplate, IMySpecial {
    //Just to illustrate that there could be multiple classes implementing the specialised interface
}

Advanced – Indenting

All indent is handled as spaces, and is tracked using a stack structure.

pushIndent(Amount) increases the indent by the amount you specify; if no parameter is specified, the default is 4 spaces.
popIndent pops the last amount pushed onto the stack.
withIndent(Amount, Action) increases the indent only for the duration of the specified action.
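The three functions above can be sketched in a few lines of C#. This is illustrative only, not the library’s actual code, but it shows the stack-of-amounts design:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class IndentSketch
{
    private readonly Stack<int> indents = new Stack<int>();

    // The current indent is the sum of all pushed amounts, as spaces.
    public string Indent => new string(' ', indents.Sum());

    public void pushIndent(int amount = 4) => indents.Push(amount);
    public void popIndent() => indents.Pop();

    // Scoped indent: always popped, even if the action throws.
    public void withIndent(int amount, Action action)
    {
        pushIndent(amount);
        try { action(); } finally { popIndent(); }
    }
}
```

The try/finally in withIndent is what guarantees the indent is restored after the action, matching the “only for the duration” behaviour.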


withIndent(8, () => {
    /*This will be indented by 8 spaces.
And so will this, on the next line.
I recommend you only use this when calling a function.*/
});
/*This will not be indented.*/
/*Within a single function you should
control your indent manually with spaces.*/
if (1 == 1) {
    /*When you indent content with spaces alongside your code,
it will be easier to see compared to calls to any of the indent functions {pushIndent, withIndent, etc..}*/
    if (2 == 2) {
        /*However deeply you nest,
just keep your open-comment-block marker anchored in-line with the rest*/
    }
}

These are all the base strategies that I currently use across my Inverse Templates. I also inherit InverseTemplate and make use of the DataContext, but you’ll have to wait for another time before I explain that in more detail.

Windowing functions – Who Corrupted SQL?

I hate writing sub-queries, but I seem to hate windowing functions even more! Take the following:

select PR.ProfileName,
    (select max(Created) from Photos P where P.ProfileID = PR.ID) as LastPhotoDate
from Profiles PR

In this example, I want to list all profile names, along with a statistic: the date of the most recently uploaded photo. It's quite easy to read, if a little bloated, but compared to windowing functions it is slower. Let's have a look at the more performant alternative:

select distinct PR.ProfileName,
    max(Created) OVER (PARTITION BY PR.ID) as LastPhotoDate
from Profiles PR
join Photos P
    on P.ProfileID = PR.ID

That’s actually quite clear (if you are used to windowing functions) and performs better. But it’s still not ideal: coders now need to learn about OVER and PARTITION just to do something seemingly trivial. SQL has let us down. It looks like the people who create RDBMSs told the SQL committee to add windowing functions to the standard. It’s not user friendly at all, and computers are supposed to do the hard work for us!

It should look like this:

select PR.ProfileName, max(Created) as LastPhotoDate
from Photos P
join Profiles PR
    on PR.ID = P.ProfileID
group by PR.ID --or group by PR.ProfileName

I don’t see any reason why an RDBMS cannot make this work. I know that if a person gave me this instruction and I had a database, I would have no trouble. Of course, if different partitioning is required within the query, then there is the option for windowing functions, but for the stock standard challenges, keep the SQL simple!

Now what happens when you get a more difficult situation? What if you want to return the most recently uploaded photo (or at least the ID of the photo)?

--Get each profile's most recent photo
select PR.ProfileName, P.ID as PhotoID
from Photos P
join Profiles PR
    on PR.ID = P.ProfileID
join (
    select ProfileID, max(Created) as Created
    from Photos
    group by ProfileID
) X
    on X.ProfileID = P.ProfileID
    and X.Created = P.Created

It works, but it’s awkward and has potential for performance problems. From my limited experience with windowing functions, and a short search on the web, I couldn’t find a windowing-function solution. But again, there’s no reason an RDBMS can’t make it easy for us, and again the SQL language should make it easy for us!

Why can’t the SQL standards group innovate? Something like this:

select PR.ProfileName, P.ID as PhotoID
from Photos P
join Profiles PR
    on PR.ID = P.ProfileID
group by ProfileID
being max(Created) --reads: being a record which has the maximum created field value

And leave it to the RDBMS to decide how to make it work? In procedural coding over a set, while you are searching for the maximum value you can also keep hold of the entity which has that maximum. There’s no reason this can’t work.
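That one-pass idea is easy to show in procedural code. The sketch below is C#, with a minimal illustrative Photo type whose field names mirror the SQL examples above (it is not a real schema or API):

```csharp
using System;
using System.Collections.Generic;

// Illustrative type only; the field names mirror the SQL examples.
class Photo
{
    public int ID;
    public int ProfileID;
    public DateTime Created;
}

static class PhotoQueries
{
    // One pass over the set: while tracking the maximum Created value
    // we also keep the photo that carries it, so no second lookup
    // (and no self-join) is needed.
    public static Photo MostRecent(IEnumerable<Photo> photos)
    {
        Photo best = null;
        foreach (var p in photos)
        {
            if (best == null || p.Created > best.Created)
                best = p;
        }
        return best;
    }
}
```

This is exactly the kind of plan an RDBMS could choose for the hypothetical "being max(Created)" syntax: track the argmax row while scanning for the max.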

It seems the limitation is the SQL standardisation body. I guess someone could always implement a workaround: create a plugin for open-source SQL query tools, along with open-source functions to convert SQL+ [with such abilities as introduced above] down to plain SQL.

(By the way I have by no means completely thought out all the above, but I hope it describes the spirit of my frustrations and of the possible solution – I hope some RDBMS experts can comment on this dilemma)

SQL-like like in C#

Sometimes you need to build dynamic LINQ queries, and that’s when the Dynamic Query Library (download) comes in handy. With this library you can build a where clause using BOTH SQL and C# syntax – except for one annoying problem: like isn’t supported.

When using pure LINQ to build a static query, you can use SqlMethods.Like. But you will find that this only works when querying a SQL dataset. It doesn’t work for local collections – there’s no C# implementation.

My Solution

So I mocked up a quick and dirty like method which only supported a single % wildcard, with no escape characters and no _ placeholder. It did the job, but with so many people asking for a solution which mimics like, I thought I’d make one myself and publish it Public Domain-like.

It features:

  • Wildcard is fully supported
  • Placeholder is fully supported
  • Escape characters are fully supported
  • Replaceable tokens – you can change the wildcard (%), placeholder (_) and escape (!) tokens, when you call the function
  • Unit Tested
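My published implementation isn’t reproduced here, but the core idea can be sketched by translating the like pattern into a regular expression. In this sketch the method name EvaluateIsLike matches the name referenced later in this post, while the class name SQLMethodsSketch and the exact signature are my own illustration; the wildcard (%), placeholder (_) and escape (!) tokens are replaceable parameters:

```csharp
using System;
using System.Text;
using System.Text.RegularExpressions;

// Sketch of a like evaluator: translate the pattern into an
// anchored regex, honouring wildcard, placeholder and escape tokens.
// Illustrative only - not the published implementation.
static class SQLMethodsSketch
{
    public static bool EvaluateIsLike(string input, string pattern,
        char wildcard = '%', char placeholder = '_', char escape = '!')
    {
        var sb = new StringBuilder("^");
        bool escaped = false;
        foreach (char c in pattern)
        {
            if (escaped)
            {
                // The character after the escape token is taken literally.
                sb.Append(Regex.Escape(c.ToString()));
                escaped = false;
            }
            else if (c == escape) escaped = true;
            else if (c == wildcard) sb.Append(".*");   // % matches any run of characters
            else if (c == placeholder) sb.Append("."); // _ matches exactly one character
            else sb.Append(Regex.Escape(c.ToString()));
        }
        sb.Append("$");
        return Regex.IsMatch(input, sb.ToString(), RegexOptions.Singleline);
    }
}
```

Regex.Escape handles the characters which are special to regular expressions but literal in a like pattern, so patterns containing dots, brackets and the rest behave as expected.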


Adding like support to the Dynamic Query Library – Dynamic.cs

I also modified the Dynamic Query Library, to support like statements, leveraging the new function. Here are the steps required to add support yourself:

1. Add the Like value into the ExpressionParser.TokenId enum


2. Add the token.id == TokenId.Like clause as shown below into ExpressionParser.ParseComparison()

        Expression ParseComparison() {
            Expression left = ParseAdditive();
            while (token.id == TokenId.Equal || token.id == TokenId.DoubleEqual ||
                token.id == TokenId.ExclamationEqual || token.id == TokenId.LessGreater ||
                token.id == TokenId.GreaterThan || token.id == TokenId.GreaterThanEqual ||
                token.id == TokenId.LessThan || token.id == TokenId.LessThanEqual ||
                token.id == TokenId.Like) {

3. Add the TokenId.Like case as shown below into the switch found at the bottom of the ExpressionParser.ParseComparison() function

                    case TokenId.LessThanEqual:
                        left = GenerateLessThanEqual(left, right);
                        break;
                    case TokenId.Like:
                        left = GenerateLike(left, right);
                        break;

4. Add the following inside the ExpressionParser class (the SQLMethods class needs to be accessible: reference the library or copy in its source code, and add a using directive for the appropriate namespace)

        Expression GenerateLike(Expression left, Expression right) {
            if (left.Type != typeof(string))
                throw new Exception("Only strings supported by like operand");

            return IsLike(left, right);
        }

        static MethodInfo IsLikeMethodInfo = null;
        static Expression IsLike(Expression left, Expression right) {
            if (IsLikeMethodInfo == null)
                IsLikeMethodInfo = typeof(SQLMethods).GetMethod("EvaluateIsLike", new Type[] { typeof(string), typeof(string) });
            return Expression.Call(IsLikeMethodInfo, left, right);
        }

5. Change the start of the default case in the switch inside ExpressionParser.NextToken() according to the code shown below

                    if (Char.IsLetter(ch) || ch == '@' || ch == '_') {
                        do {
                            NextChar();
                        } while (Char.IsLetterOrDigit(ch) || ch == '_');

                        string checktext = text.Substring(tokenPos, textPos - tokenPos).ToLower();
                        if (checktext == "like")
                            t = TokenId.Like;
                        else
                            t = TokenId.Identifier;
                        break;
                    }


I use this in my own business system, but I preprocess the like predicates, as I have quite a few “AI” rules for bank transaction matching. (You can also use like statements directly.)

There are many ways to cache; here is how I cache a predicate, looping over the set of AIRules in my DB:

RuleCache[i].PreProcessedPredicate = DynamicQueryable.PreProcessPredicate<vwBankTransaction>(RuleCache[i].Filter); //Change the text-based predicate into a LambdaExpression

And then here is how I use it, looping over the array of cached rules:

bool MatchesRule = DynamicQueryable.Where(x.AsQueryable(), RuleCache[i].PreProcessedPredicate).Any(); //Run the rule

Where `x` is a generic list (not a db query), containing the one record I am checking. (Yes, it would be possible to loop over a larger set [of bank transactions], but I haven’t got around to such a performance improvement in my system – I haven’t noticed any performance issues – it’s not broken).
