Don’t use Resource Strings with C#

I recommend separate static class files with static readonly strings instead.

Problems:

Resource strings live in .resx XML files, edited through a designer grid that is awkward for multi-line text and disconnected from the rest of your code. Strings are better off in code files, where multi-line strings can use @"" or $@"". As a convention, you might append "Resources" to the class name.
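For example, a minimal sketch of the convention (the class and member names are just illustrative):

public static class EmailResources
{
    //A multi-line string using a verbatim literal
    public static readonly string WelcomeBody = @"Hello,

Thanks for signing up.";

    //A static function which takes and applies parameters
    public static string Greeting(string name)
    {
        return $"Hello {name},";
    }
}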

Benefits:

  • You can use any coding techniques with them
  • You can be more cohesive, creating multiple separate classes
  • You can have static functions to take and apply parameters
  • You use the normal text editor
  • You can press F12 on a reference, and get directly to editing the string
  • No XML to deal with – fewer merge conflicts

 

Why did Open Source Bounties Fail?

I’m shocked. I thought Bounties would supercharge Open Source development. You were the chosen one! (cringe)

So today, I wanted to post a bounty for Stasher. I did so on BountySource, but then I realised it was broken and abandoned. I looked further afield and it’s the same story, a digital landscape littered with failures.

Bounty Source

Bounty Source is one of the better ones, limping along. They need a serious financial backer to grow their community faster.

  1. They seem to have a lot of server issues. Have a look at their recent Twitter feed [https://twitter.com/Bountysource]
  2. When I posted my bounty, I expected a tweet to go out from their account (as per my $20 add-on). Nothing. Either that subsystem is broken, or it was never automated.
  3. Bounty search is broken – “Internal server error.” in the console log.
  4. You know what I think about good security architecture. If people can’t talk about security correctly, it doesn’t matter that they know about bcrypt – can they properly wield its power?
  5. No updates on their Press page since 2014

Freedom Sponsors

They don’t have enough of a profile to excite me about their future; this has apparently been executed on a shoestring budget. (I’ll try posting a bounty here if the Bounty Source one lapses.)

  1. Only 12 bounties posted this year (Jan–Nov); only 4 of those have workers, and 2 of those look inactive. But at least search works.
  2. Their last tweet was in 2012.

Others

http://bountyoss.com/ and http://cofundos.com/ are down.

Analysis

This shouldn’t have happened. The model failed because these startups ran out of cash and motivation.

There is massive potential here. So far we’ve seen MySpace; we need Facebook-level execution. And whoever does this needs a good financial backer with connections to help grow the community.

I hope to see an open source foundation, maybe Linux Foundation, buy Bounty Source.

Stasher – File Sharing with Customer Service

(This is quite a technical software article, written with software coders in mind)

It’s time for a new file-sharing protocol. P2P in general is no longer relevant as a concept, and central file-sharing sites show that consumers are happy with centralised systems behind a web interface. I think I have a good idea for the next incremental step, but first some history.

It’s interesting that P2P has died down so much. There was Napster, and other successes followed, but BitTorrent seems to have ruled them all. Along the way file discovery was lost, and with Universal Plug and Play being a big security concern, even re-uploading is off by default.

P2P is no longer needed. It was so valuable before because it distributed the upload bandwidth and also provided some anonymity. But bandwidth continues to fall in price. MegaUpload and others like it were actually the next generation, adding some customer service around the management of files and charging for premium service. Dropbox and others have since carved out even more of that space.

Stash (which is hopefully not trademarked) is my concept to bring back discovery. It’s a different world now, where many use VPNs and even Tor, so we don’t need to worry so much about security and anonymity.

It’s so simple, it’s easy to trust. With only a few hundred lines of code in a single file, anyone can compile their own on Windows in seconds, so there can be no hidden backdoors. Users who can’t be bothered with that can download the application from a trusted source.

It works by being ridiculously simple. A dumb application runs on your computer, set up to point to one or more servers, and it only operates on one folder: the one it resides in. From there the servers control Stasher, issuing commands such as downloading or uploading a file within that folder, and a client can ban a server from performing a particular action. A sketch of such a client follows.
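To illustrate the shape of that client, here is a minimal sketch (entirely hypothetical – the endpoint names, command format and polling transport are my assumptions, not part of any spec):

using System;
using System.IO;
using System.Net.Http;
using System.Threading.Tasks;

class StashClient
{
    //The client only operates on the folder it resides in
    static readonly string Folder = AppDomain.CurrentDomain.BaseDirectory;
    //The configured list of controlling servers
    static readonly string[] Servers = { "https://stash.example.com" };
    //Actions this client has banned servers from performing
    static readonly string[] Banned = { "DELETE" };

    static async Task Run()
    {
        using (var http = new HttpClient())
        while (true)
        {
            foreach (var server in Servers)
            {
                //Hypothetical endpoint: ask the server for its next command,
                //e.g. "GET photo.jpg" (download) or "PUT photo.jpg" (upload)
                string command = await http.GetStringAsync(server + "/next-command");
                string[] parts = command.Split(' ');
                if (Array.IndexOf(Banned, parts[0]) >= 0) continue; //enforce bans
                if (parts[0] == "GET")
                    File.WriteAllBytes(Path.Combine(Folder, parts[1]),
                        await http.GetByteArrayAsync(server + "/files/" + parts[1]));
                //"PUT" (upload from the folder) would be handled similarly
            }
            await Task.Delay(TimeSpan.FromSeconds(30)); //dumb polling loop
        }
    }
}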

And that’s it. It’s so basic, you should never have to update the client. New features should be resisted. Thumbnails should be generated on the server – because there is time and bandwidth to simply get the whole file.

All of this works with varying software on the server, but the same Stash client. There is no direct P2P; however, several servers can coordinate, such that a controller server can ask a client to upload to another specific server. Such a service can pre-package the Stash client with specific servers, and throughout its lifetime the client’s server list can be updated with new servers.

I’m thinking of building this, but I’m in no rush. I’ll make it open source. Can you think of any other applications for such a general-purpose file sharing framework?

For more information, see https://bitbucket.org/merarischroeder/stasher/wiki/Home

Appendix

Security measures ideas:

  • [Future] Code Virtual Machine
    • Only System and VM namespaces used
    • VM namespace is a separate small DLL which interacts with the system { Files, Network, System Info }
    • It’s easier to verify that the VM component is safe in manual review.
    • It’s easy to automatically ensure the application is safe
    • Only relevant for feature-extended client, which will span multiple files and more
  • [Future] Security analyser works by decompiling the software – ideally a separate project

Remaining problems/opportunities:

  • Credit – who created the original photo showing on my desktop? Creators should get some sort of community credit, growing with the votes they receive. This needs some sort of separate/isolated server which takes a hash and signs/stores it with a datetime, potentially also with extra metadata such as author name/alias.
    • Reviewers, while not as important, should also be able to have their work registered somewhere – reviewing 1000 desktop backgrounds takes time. Flickr, for example, could keep a backup of such credit; their version of the ledger could be signed and dated by a similar process.
  • Executable files and malware – 
    • AntiVirus software on the client
    • Trusting the server to make such checks – e.g. looking inside non-executables for payloads, such as data appended to image file tails.
  • Hacked controller
    • File filters on the client to only allow certain file types (to exclude executable files) – { File extensions, Header Bytes }
    • HoneyPot Clients – which monitor activity, to detect changes in behavior of particular controllers
    • A human operator of the controller types in a password periodically to assure that it’s still under their control. Message = UTCTimestamp + PrivateKeyEncrypt(UTCTimestamp), which is stored in memory. (See the sketch below.)
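A sketch of that liveness check (the .NET crypto calls are real; the framing and the five-minute freshness window are my assumptions – and signing is the practical way to do the “PrivateKeyEncrypt” step):

using System;
using System.Security.Cryptography;
using System.Text;

static class LivenessCheck
{
    //Controller side: sign the current UTC timestamp with the private key
    public static byte[] Sign(RSA privateKey, out string timestamp)
    {
        timestamp = DateTime.UtcNow.ToString("O");
        return privateKey.SignData(Encoding.UTF8.GetBytes(timestamp),
            HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);
    }

    //Verifier side: check the signature, and reject stale timestamps (replay)
    public static bool Verify(RSA publicKey, string timestamp, byte[] signature)
    {
        var age = DateTime.UtcNow - DateTime.Parse(timestamp).ToUniversalTime();
        return age < TimeSpan.FromMinutes(5)
            && publicKey.VerifyData(Encoding.UTF8.GetBytes(timestamp),
                signature, HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);
    }
}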

Try Scope Catch Callback [TSCC] for ES6

So it has started – it wasn’t a hollow thought bubble. I have begun the adventure beyond the C# nest [http://blog.alivate.com.au/leave-c-sharp/]. It will take a while, because I still have a lot of software that runs on C#, and I do still like the language, but all new development will be on ES6 and NodeJS.

So I’m going to record my outlook over a few blog posts. I re-discovered Cloud9 IDE, and I’ve got a few thoughts on architecture and a new feature for ES6.

Today, I’ll tell the world about my proposed ES6 enhancement.

Despite the ECMAScript committee stating they are about “Standards at Internet Speed”, there isn’t much Internet tooling in place to make that happen. They have certainly been successful in making rapid progress, but where does one submit an idea to the committee? There’s not even an email link. I’m certainly not going to cough up around $100k AUD to become a full member. [Update: they use GitHub – a link to it from their main website would be great. Also check out: https://twitter.com/ECMAScript]

So I’ll be satisfied to just put my first ES6 idea here.

Try blocks don’t work in a callback world. I’m sure there are libraries which could make this nicer, and in C#, try blocks do work with the async language features, for instance.
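Here’s a minimal C# sketch of that (the URL is illustrative) – the exception from the awaited operation is caught by the ordinary try block:

using System;
using System.Net.Http;
using System.Threading.Tasks;

class Demo
{
    static async Task FetchAsync()
    {
        using (var http = new HttpClient())
        {
            try
            {
                string body = await http.GetStringAsync("https://example.com");
                Console.WriteLine(body.Length);
            }
            catch (HttpRequestException e)
            {
                //Reached even though the failure happened asynchronously
                Console.WriteLine(e);
            }
        }
    }
}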

So here is some Javascript code which won’t catch an error:

try
{
    $http.get(url).then((r) => {
        handleResponse(r);
    });
}
catch (e)
{
    console.log(e);
}

In this example, if there is an error during the HTTP request, it will go uncaught.

That was simple, though. How about a more complex situation?

function commonError(e) {
    console.log(e);
}

try
{
    runSQL(qry1, (result) => {
        doSomethingWith(result);
        runSQL(qry2, (result) => {
            doSomethingWith(result);
        }, commonError);
    }, commonError);
}
catch (e)
{
    commonError(e);
}

Callback nesting isn’t very nice. This is why `await` is pushed forward as a good candidate. But what if the API you target doesn’t implement Promise? What if you only sometimes define a try block?

My proposal is to supply a method which gets the Try Scope Catch Callback [TSCC]. If you don’t return a promise, it would be like this:

function get(url, then, error) {
  error = error || window.callback.getTryScopeCatchCallback(); //TSCC (proposed API)

  //when an error occurs:
  error(e);

  //This could be reaching an enclosing
  //try/catch block, or be the result
  //of a callback from another error method
}

Promises already have a catch function in ES6. They’re so close! A Promise should direct its error/catch callback to the TSCC by default. If the Promise spec were updated to include this, my first code example above would have caught the error with no changes.

So what do you think ECMA members, can we get this into ECMAScript?

Feedback log – from the es-discuss@mozilla.org mailing list

  • kdex

Why not just transform callback-based APIs into `Promise`s and use (presumably ES2017) `await`/`async` (which *does* support `try`/`catch`)?

e.g.:

try {
    await curl("example.com");
    /* success */
}
catch (e) {
    /* error */
}
  • My response

1. Whether you await or not, the try scope’s catch callback [TSCC] should still be captured.

2. If there is no use of Promise (for the coder’s own design reasons) the try scope’s catch callback [TSCC] should still be available.

Why I want to leave C#

Startup performance is atrocious and, critically, that slows down development. It’s slow to get the first page of a web application, slow to navigate to whole new sections, and worst of all: initial Entity Framework LINQ queries.

It’s 2016 and .Net is very mature, but this problem persists. I love the C# language much more than Java, but when it comes to the crunch, run-time performance is critical. Yes, I was speaking of startup performance, but you encounter the same cost whenever a new area of the software warms up, and when the AppPool is recycled (scheduled every 29 hours by default). Customers see it most, but it’s developers who must test and retest.

It wastes customers’ and developers’ time. Time means money, but the hidden loss is focus. You finally get focused on a task, then have to wait 30 seconds for an ASP.NET web page to load so you can test something. Even stopping debugging in VS can take tens of seconds!

There are known ways to minimise such warmup problems, such as native image generation and EF query caching, but neither is a complete solution. And why work around a problem that node.js and even PHP don’t have?

.Net and C# are primarily for business applications. So how important is optimising a loop over millions of records (for big data and science), compared to the user and developer experience of starting and running with no delay?

Although I have been critical of Javascript as a language, its recent optimisations are admirable. It has been optimised to prioritise first-use speed, with critical sections optimised further as needed.

So unless Microsoft fixes this problem once and for all, without requiring developers to coerce workarounds, they’re going to find long-term dedicated coders such as myself shifting to Javascript, especially now that ECMAScript and TypeScript make Javascript infinitely more palatable.

I have already recently jettisoned EF in favour of a proprietary solution which I plan to open source. I also have plans for node.js and even my own IDE which I plan to lease. I’m even thinking of leaving the Managed world altogether – Heresy!

.Net has lots going for it, it’s mature and stable, but that’s not enough anymore. Can it be saved? I’m not sure.

Let’s leave Javascript behind

Disclaimer: I am sure Javascript will continue to be supported, and continue even to progress in features and support, regardless of any Managed Web. Some people simply love it, with all the flaws and pitfalls, like a sweet elderly married couple holding onto life for each other.

It’s great what the web industry is doing with ECMAScript; from version 6 we will finally see something resembling classes and modules. But isn’t that something the software industry has had for years? Why do we continue to handicap the web with an inferior language when there have always been better options? Must we wait another 2-3 years before we get operator overloading in ECMAScript 7?

The .Net framework is a rich standardised framework with an Intermediate Language (IL). The compiler optimisations, toolset and importantly the security model, make it a vibrant and optimised ecosystem which could be leveraged. It could have been leveraged years ago with a bare minimum Mono CLR.

Google Chrome supports native code, however it runs in a separate process and calls to the DOM must be marshalled through inter-process communication methods. This is not ideal. If the native code support was in the same process it would be a good foundation for Mono.

I believe it is possible, perhaps even trivial, to achieve this nirvana of a Managed Web. We just need to take small considered steps to get there, so here’s my plan.

  1. Simple native code in the same process – Javascript is currently executed on the main thread, presumably through the window message pump executing delegates. These delegates can simply forward to managed function delegates. But first we should be able to trigger an alert window through native code which is compiled inside the Google Chrome code base.
  2. Simple mono support – Fire up Mono, provide enough support in a Base Class Library (BCL) for triggering an alert. This time there will be an IL DLL with a class which implements an Interface for start-up.
  3. Fuller API – With the simple milestones above completed, a complete BCL API can be designed and implemented.
  4. Optimisations – For example, enumerating the DOM may be slowed by crossing the Managed/Unmanaged boundary; jQuery-like functions could be implemented in native code and exposed through the BCL.

Along the way, other stacks and browsers could also leverage our work, establishing support for at least Java as well.

Example API:

IStartup

  • void Start(IWindow window) – Called when the applet is first loaded, just like when Javascript is first loaded (for Javascript there isn’t an event; it simply starts executing the script from the first line)

IWindow
see http://www.w3schools.com/jsref/obj_window.asp

IDocument
see http://www.w3schools.com/jsref/dom_obj_document.asp
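To sketch how this API might look in C# (the signatures are my guesses based on the DOM objects linked above; IElement is likewise assumed):

using System;

public interface IStartup
{
    //Called when the applet is first loaded
    void Start(IWindow window);
}

public interface IWindow
{
    IDocument Document { get; }
    void Alert(string message);                        //window.alert
    int SetTimeout(Action callback, int milliseconds); //window.setTimeout
}

public interface IDocument
{
    string Title { get; set; }          //document.title
    IElement GetElementById(string id); //document.getElementById
}

public interface IElement
{
    string InnerHtml { get; set; }
}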

Warm up – Possible disadvantage

Javascript can be interpreted straight away, and several levels of optimisation are applied only where needed, favouring fast execution time. IL would need to be JIT’d, which would be relatively slow, but there’s no reason it couldn’t be AOT-compiled by the web server. Still, I see this as the biggest disadvantage, and it needs to be kept front of mind.

Other people around the web who want this

http://tirania.org/blog/archive/2012/Sep-06.html

 


Inverse Templates

Hackathon project – Coming soon….

[Start Brief]
Writing open source software is fun, but to get recognition and feedback you need to finish and promote it. Todd, founder of Alivate, has completed most of the initial parts of a new open source project, “Inverse Templates”, including most of the content below, and will work with this week’s hackathon group to publish it as an isolated open source project and NuGet package.

Skills to learn: Code Templating, Code Repositories, NuGet Packages, Lambda, Text Parsing.
Who: Anyone from High School and up is encouraged to come.

We will also be able to discuss future hackathon topics and schedule. Don’t forget to invite all of your hacker friends!

Yes, there will be Coke and Pizza, donated by Alivate.
[End Brief]

The Problem

Many template engines re-invent the wheel, supporting looping logic, sub-templates and many other features. Control code is awkward, and extensive use makes template files look confusing to first-time users.

So why have yet another template engine, when you can simply leverage the coding language of your choice, along with the skills and experience you fought hard for?

The Solution

Normal template engines treat output content (HTML, for example) as the first-class citizen, with variables and control code second class. Inverse Template systems reverse this. By using the block-commenting feature of C-like languages (at least), Inverse Template systems let you leverage the full power of your programming language.

At the moment we only have a library for C# Inverse Templates. (Search for the NuGet Package, or Download and reference the latest stable DLL)

Need a loop? Then use a for, foreach, while, and more.
Sub-templating? Call a function, whether it’s in the same code file, in another object, static, or something more exotic.

Introductory Examples

Example T4:

Introductions:
<# foreach (var Person in People) { #>
Hello <#= Person.Name #>, great to iterate you!
<# } #>

Example Inverse Template:

/*Introductions:*/
foreach (var Person in People) {
/*
Hello */w(Person.Name);/*, great to iterate you!*/
}

As you can see, we have a function named w, which simply writes to the output file. More functions are defined for tabbing, and being an Inverse Template you can inherit the InverseTemplate object and extend it as you need! These functions are named with a single character so they aren’t too imposing.
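For example, a hypothetical extension (assuming InverseTemplate exposes w to subclasses):

class HtmlTemplate : InverseTemplate
{
    //e: writes HTML-encoded content, alongside the built-in w
    protected void e(string content)
    {
        w(System.Net.WebUtility.HtmlEncode(content));
    }
}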

Pre-Processing
As with T4 pre-processing, Inverse Template files are also pre-processed, converting comment blocks into code, then saved as a new file which can be compiled and debugged. Pre-processing, as opposed to interpreting templates, is required because we rely on the compiler to compile the control code. Furthermore, pre-processed (and compiled) templates perform better than interpreted ones.

Example pre-processed Inverse Template:

l("Introductions:");
foreach (var Person in People) {
  n("");
  t("Hello ");w(Person.Name);n(", great to iterate you!");
}

Function l will output any tabbing, then content, then a line-ending.
Function n will output the content followed by a line-ending.
Function t will output any tabbing followed by content (no line-ending).

The pre-processor finds and processes all files in a given folder hierarchy ending with “.ct.cs”. It is an external console application, so it even works with Express editions of Visual Studio.
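To make the comment-to-code conversion concrete, here is a simplified sketch of the rewrite step (my illustration, not the actual pre-processor – it only emits w calls, ignoring tabbing and the choice between the {l,n,t} functions):

using System.Text;

static class InverseTemplatePreprocessor
{
    //Turn each /*...*/ block into a w("...") call; everything between
    //blocks is control code and passes through untouched.
    public static string Preprocess(string template)
    {
        var output = new StringBuilder();
        int pos = 0;
        while (pos < template.Length)
        {
            int open = template.IndexOf("/*", pos);
            if (open < 0) { output.Append(template.Substring(pos)); break; }
            output.Append(template.Substring(pos, open - pos)); //control code
            int close = template.IndexOf("*/", open + 2); //assumes well-formed blocks
            string content = template.Substring(open + 2, close - open - 2);
            output.Append("w(@\"" + content.Replace("\"", "\"\"") + "\");");
            pos = close + 2;
        }
        return output.ToString();
    }
}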

You should:

  • Put all of your Definitions into folder .\InverseTemplates\Definitions\, sub-folders are ok
  • Actively exclude, and then re-include, the generated .\InverseTemplates\Processed\ folder after pre-processing
  • Exclude the Definitions folder before you compile/run your project

Not the answer to all your problems

I’m not claiming that Inverse Templates are the ultimate solution – they’re simply not. If you have content-heavy templates with no control code and minimal variable merging, then perhaps you just want to use T4.

Also, you may find that you’re more comfortable using all of the InverseTemplate functions directly {l,n,w,t}, instead of using comment blocks. In some cases this can look more visually appealing, and then you can bypass the pre-processing step. This could be particularly true of templates where you have lots of control code and minimal content.

But then again, keep in mind that your code-editor will be able to display a different colour for comment blocks. And perhaps in the future your code-editor may support InverseTemplates using a different syntax highlighter inside your comment blocks.

For a lot of work I do, I’ll be using Inverse Templates. I will have the full power of the C# programming language, and won’t need to learn the syntax of another template engine.

I’m even thinking of using it as a dynamic rendering engine for web, but that’s more of a curiosity than anything.

Advanced Example – Difference between Function, Generate and FactoryGet

class TemplateA : InverseTemplate {
  public override void Generate() {
    /*This will be output first, no line-break here.*/
    FunctionC(); //A simple function call. I suggest using these most often, mainly to simplify your cohesive template, when function re-use is unlikely.
    Generate("TemplateB"); //Useful when there is some function re-use, or perhaps you want to contain your generation in specific files in a particular structure. (Name-based lookup is assumed here.)
    IMySpecial s = FactoryGet("TemplateD"); //Useful for more advanced situations which require a search by interface implementation, with optional selection of a specific implementation by class name.
    s.SpecificFunction("third");
  }
  private void FunctionC() {
    /*
    After a line-break, this is now the second line, with a line-break.
    */
  }
}
class TemplateB : InverseTemplate {
  public override void Generate() {
    /*This will be the third line.*/
  }
}
interface IMySpecial
{
  void SpecificFunction(string SpecificParameter);
}
class TemplateD : InverseTemplate, IMySpecial
{
  public void SpecificFunction(string SpecificParameter) {
    /* This will follow on from the */w(SpecificParameter);/* line.
    */
  }
}
class TemplateF : InverseTemplate, IMySpecial
{
  //Just to illustrate that there could be multiple classes implementing the specialised interface
  public void SpecificFunction(string SpecificParameter) { }
}

Advanced – Indenting

All indent is handled as spaces, and is tracked using a stack structure.

pushIndent(Amount) will increase the indent by the amount you specify; if no parameter is specified, the default is 4 spaces.
popIndent() will pop the last amount of indent pushed onto the stack.
withIndent(Amount, Action) will increase the indent only for the duration of the specified action.

Example:

withIndent(8, () => {
  /*This will be indented by 8 spaces.
  And so will this, on the next line.
  I recommend you only use this when calling a function.*/
});
/*This will not be indented.*/
/*Within a single function you should
    control your indent manually with spaces.*/
if (1 == 1) {
/*
    it will be easier to see compared to calls to any of the indent functions {pushIndent, withIndent, etc..}*/
  if (2 == 2) {
/*
    just keep your open-comment-block marker anchored in-line with the rest*/
  }
}

These are all the base strategies that I currently use across my Inverse Templates. I also inherit InverseTemplate and make use of the DataContext, but you’ll have to wait for another time before I explain that in more detail.


Windowing functions – Who Corrupted SQL?

I hate writing sub-queries, but I seem to hate windowing functions even more! Take the following:

select
PR.ProfileName,
(select max(Created) from Photos P where P.ProfileID = PR.ID) as LastPhotoDate
from Profiles PR

In this example, I want to list all profile names along with a statistic: the date of each profile’s most recently uploaded photo. The sub-query is quite easy to read, if a little bloated, but compared to windowing functions it is slower. Let’s have a look at the more performant alternative:

select
PR.ProfileName,
max(Created) OVER (PARTITION BY PR.ID) as LastPhotoDate
from Profiles PR
join Photos P
on P.ProfileID = PR.ID

That’s actually quite clear (if you are used to windowing functions) and performs better. But it’s still not ideal: coders now need to learn about OVER and PARTITION just to do something seemingly trivial. SQL has let us down. It looks like the people who create RDBMSs told the SQL committee to add windowing functions to the SQL standard. It’s not user-friendly at all, and computers are supposed to do the hard work for us!

It should look like this:

select
PR.ProfileName,
max(Created)
from Photos P
join Profiles PR
on PR.ID= P.ProfileID
Group By PR.ID --or Group By PR.ProfileName

I don’t see any reason why an RDBMS cannot make this work. I know that if a person gave me this instruction and I had a database, I would have no trouble. Of course, if different partitioning is required within the query, then there is the option for windowing functions, but for the stock standard challenges, keep the SQL simple!

Now what happens when you get a more difficult situation? What if you want to return the most recently uploaded photo (or at least the ID of the photo)?

--Get each profiles' most recent photo
select
PR.ProfileName,
P.PhotoFileName,
P.PhotoBlob
from Photos P
join Profiles PR
on PR.ID= P.ProfileID
join (
select ProfileID, max(Created) as Created
from Photos
group by ProfileID
) X
on X.ProfileID = P.ProfileID
and X.Created = P.Created

It works, but it’s awkward and has potential for performance problems. From my limited experience with windowing functions and a short search on the web, I couldn’t find a windowing-function solution. But again, there’s no reason an RDBMS can’t make this easy for us, and the SQL language should make it easy for us!

Why can’t the SQL standards group innovate? Something like this:

select
ProfileName,
P.PhotoFileName,
P.PhotoBlob
from Photos P
join Profiles PR
on PR.ID= P.ProfileID
group by ProfileID
being max(Created) --reads: being a record which has the maximum created field value

And leave it to the RDBMS to decide how to make it work? In procedural coding with a set, while you are searching for a maximum value you can also store the entity which has that maximum. There’s no reason this can’t work.

It seems the limitation is the SQL standardisation body. I guess someone could always implement a workaround: create a plugin for open-source SQL query tools, along with open-source functions to convert SQL+ [with such abilities as introduced above] into plain SQL.

(By the way I have by no means completely thought out all the above, but I hope it describes the spirit of my frustrations and of the possible solution – I hope some RDBMS experts can comment on this dilemma)

SQL-like like in C#

Sometimes you need to build dynamic LINQ queries, and that’s when the Dynamic Query Library (download) comes in handy. With this library you can build a where clause using BOTH SQL and C# syntax. Except for one annoying problem: like isn’t supported.

When using pure LINQ to build a static query, you can use SqlMethods.Like. But you will find that this only works when querying a SQL dataset. It doesn’t work for local collections – there’s no C# implementation.

My Solution

So I mocked up a quick and dirty like method which only supported a single % wildcard, with no escape characters and no _ placeholder. It did the job, but with so many people asking for a solution which mimics like, I thought I’d make a proper one and publish it Public Domain-like.

It features:

  • Wildcard is fully supported
  • Placeholder is fully supported
  • Escape characters are fully supported
  • Replaceable tokens – you can change the wildcard (%), placeholder (_) and escape (!) tokens, when you call the function
  • Unit Tested
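Here is a minimal sketch of the approach – a regex-based reimplementation with the same features (my sketch, not the downloadable original below):

using System.Text;
using System.Text.RegularExpressions;

public static class SQLMethods
{
    //Two-argument overload, matching the GetMethod lookup in step 4 below
    public static bool EvaluateIsLike(string input, string pattern)
    {
        return EvaluateIsLike(input, pattern, '%', '_', '!');
    }

    public static bool EvaluateIsLike(string input, string pattern,
        char wildcard, char placeholder, char escape)
    {
        var regex = new StringBuilder("^");
        bool escaped = false;
        foreach (char c in pattern)
        {
            if (escaped) //previous character was the escape token
            {
                regex.Append(Regex.Escape(c.ToString()));
                escaped = false;
            }
            else if (c == escape) escaped = true;
            else if (c == wildcard) regex.Append(".*");   //% matches any run of characters
            else if (c == placeholder) regex.Append("."); //_ matches exactly one character
            else regex.Append(Regex.Escape(c.ToString()));
        }
        regex.Append("$");
        return Regex.IsMatch(input, regex.ToString(), RegexOptions.Singleline);
    }
}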

Downloads:

Adding like support to the Dynamic Query Library – Dynamic.cs

I also modified the Dynamic Query Library, to support like statements, leveraging the new function. Here are the steps required to add support yourself:

1. Add the Like value into the ExpressionParser.TokenId enum

            DoubleBar,
            Like
        }

2. Add the token.id == TokenId.Like clause as shown below into ExpressionParser.ParseComparison()

        Expression ParseComparison() {
            Expression left = ParseAdditive();
            while (token.id == TokenId.Equal || token.id == TokenId.DoubleEqual ||
                token.id == TokenId.ExclamationEqual || token.id == TokenId.LessGreater ||
                token.id == TokenId.GreaterThan || token.id == TokenId.GreaterThanEqual ||
                token.id == TokenId.LessThan || token.id == TokenId.LessThanEqual ||
                token.id == TokenId.Like) {

3. Add the TokenId.Like case as shown below into the switch found at the bottom of the ExpressionParser.ParseComparison() function

                    case TokenId.LessThanEqual:
                        left = GenerateLessThanEqual(left, right);
                        break;
                    case TokenId.Like:
                        left = GenerateLike(left, right);
                        break;
                }

4. Add the following inside the ExpressionParser class (the SQLMethods class needs to be accessible – reference its library or copy in the source code, with a using directive for the appropriate namespace)

        Expression GenerateLike(Expression left, Expression right)
        {
            if (left.Type != typeof(string))
                throw new Exception("Only strings supported by like operand");

            return IsLike(left, right);
        }

        static MethodInfo IsLikeMethodInfo = null;
        static Expression IsLike(Expression left, Expression right)
        {
            if (IsLikeMethodInfo == null)
                IsLikeMethodInfo = typeof(SQLMethods).GetMethod("EvaluateIsLike", new Type[] { typeof(string), typeof(string) });
            return Expression.Call(IsLikeMethodInfo, left, right);
        }

5. Change the start of the default switch case in ExpressionParser.NextToken() according to the code shown below

                default:
                    if (Char.IsLetter(ch) || ch == '@' || ch == '_') {
                        do {
                            NextChar();
                        } while (Char.IsLetterOrDigit(ch) || ch == '_');

                        string checktext = text.Substring(tokenPos, textPos - tokenPos).ToLower();
                        if (checktext == "like")
                            t = TokenId.Like;
                        else
                            t = TokenId.Identifier;
                        break;
                    }

Example

I use this in my own business system, but I preprocess the LIKE rules, as I have quite a few “AI” rules for bank transaction matching. (You can also use like statements directly.)

There are many ways to cache; here is how I cache a predicate, looping over the set of AIRules in my DB:

RuleCache[i].PreProcessedPredicate = DynamicQueryable.PreProcessPredicate<vwBankTransaction>(RuleCache[i].Filter); //Change the textbased predicate into a LambdaExpression

And then here is how I use it, looping over the array of cached rules:

bool MatchesRule = DynamicQueryable.Where(x.AsQueryable(), RuleCache[i].PreProcessedPredicate).Any(); //Run the rule

Where `x` is a generic list (not a DB query) containing the one record I am checking. (Yes, it would be possible to loop over a larger set [of bank transactions], but I haven’t got around to that performance improvement in my system – I haven’t noticed any performance issues; it isn’t broken.)
