Wednesday, October 30, 2013

AngularJS Unit testing - A helper for stubbing dependent methods that return promises

Deferreds and Promises have changed JavaScript development for the better. They offer an escape from callback hell. AngularJS supports a version of promises heavily inspired by the popular Q library.

This is great; however, mocking out AngularJS service dependencies that return promises in tests was less great!

I want to test this:
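Something along these lines (a sketch - the controller, module and service names here are illustrative, the real snippet is in the fiddle below):

// Assuming: var app = angular.module('myApp', []);
app.controller('MyCtrl', function ($scope, dataService) {
  // fetchData returns a promise; we unwrap the result ourselves
  // in the callback now that automatic promise unwrapping is gone.
  dataService.fetchData('someArg').then(function (result) {
    $scope.data = result;
  });
});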


Incidentally, this kind of pattern is becoming more common now that automatic promise unwrapping is gone.

What I wanted:
  • A way to easily mock the "fetchData" service method
  • Capability to assert "fetchData" was called with the right arguments (using Jasmine spies)
  • The mock "fetchData" should return a promise so I can call "then" and use utilities such as $q.all
  • As little boilerplate code as possible
So I wrote this little helper:
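It's only a few lines - a sketch of it (using the Jasmine 1.x spy API of the time; the function name is illustrative):

// Replace service[methodName] with a Jasmine spy that returns an
// already-resolved promise, so tests can call "then" and $q.all on it.
function stubPromise(service, methodName, $q, resolvedValue) {
  var deferred = $q.defer();
  deferred.resolve(resolvedValue);
  spyOn(service, methodName).andReturn(deferred.promise); // Jasmine 1.x
  return deferred;
}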



And here is what the full test looks like:
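Roughly like this (module and service names are again illustrative; "module" and "inject" come from angular-mocks):

describe('MyCtrl', function () {
  var $scope, dataService;

  beforeEach(module('myApp'));

  beforeEach(inject(function ($rootScope, $controller, $q, _dataService_) {
    $scope = $rootScope.$new();
    dataService = _dataService_;
    stubPromise(dataService, 'fetchData', $q, ['some', 'data']);
    $controller('MyCtrl', { $scope: $scope, dataService: dataService });
  }));

  it('fetches data and puts the result on the scope', function () {
    $scope.$apply(); // triggers a digest so the promise resolves
    expect(dataService.fetchData).toHaveBeenCalledWith('someArg');
    expect($scope.data).toEqual(['some', 'data']);
  });
});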

Here is a full working jsfiddle

There's potential to expand this with more functionality, such as dealing with failing promises and giving more control over when the promise is resolved. If you think that's a good idea, let me know in the comments.




Tuesday, December 4, 2012

The Rise of In-Browser Analytics

In-browser analytics with large client-side datasets are now feasible and will form an important new trend for the web. 

Typically, webpages now load and display tens of records at a time. I believe loading many thousands of records at a time will become more and more common, with the aggregation and filtering then performed in the browser.

The Current Situation

Many websites consist of simple drilldowns. For example, on my online banking site I see a page with a summary of my recent transactions and links to load a new page with more detailed information, e.g. summary page -> individual statement page -> individual transaction page.

More recently there has been a growing trend towards single-page applications - however, the underlying services often still reflect the old model. For example, a RESTful API for the banking example could consist of linked summary resources, statement resources and transaction resources, with each drilldown causing another round trip to the server.

This simple drilldown approach often makes sense and will never be fully replaced. However I believe there is an alternative approach that is beginning to make more sense - load all the data and perform the necessary aggregations and display filtering on the client. For example this crossfilter demo loads 250,000 rows of data and allows interactive filtering and drill down with instantaneous results once the data has been loaded.

Why Change?

The simple answer is that in-browser analytics with large client-side datasets can produce a better user experience.

In my online banking example, if all transactions were loaded on the client then monthly statements become just an arbitrary choice. If I want to see what transactions I made on a 10-day vacation, I can do it instantly. Likewise, how much money have I ever spent in Starbucks? A near-instant result is available.

In short - by having all the data available in-browser, all interactions with the data can become quicker and certain interactions that were previously impossible become possible. Consider watching your data dynamically change when moving a slider over a date range in comparison to the typical interaction of choosing a date and pressing submit. 
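For a flavour of how that slider interaction looks with crossfilter (the transaction field names here are illustrative):

// Load all transactions once, then filter interactively in the browser.
var cf = crossfilter(transactions);
var byDate = cf.dimension(function (t) { return t.date; });
var spendByMerchant = cf.dimension(function (t) { return t.merchant; })
    .group().reduceSum(function (t) { return t.amount; });

// Wired to a date-range slider: re-filter and redraw on every change -
// no server round trip required.
byDate.filterRange([new Date(2012, 5, 1), new Date(2012, 5, 11)]);
console.log(spendByMerchant.top(5)); // top five merchants in the range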

Why Now?

Until now, in-browser analytics faced many technical obstacles; however, a number of changes have made this approach viable:

  • Increased Connection Speeds. Downloading a megabyte of data can be completed in seconds on broadband connections, and a single megabyte of compressed data can store hundreds of thousands of rows. Utilising Local Storage with incremental updates can also help reduce load times and provide an offline capability for mobile users - see the sketch after this list. In addition, from personal experience, users are often happy to accept an initial few seconds' delay if they know that all interactions are lightning fast afterwards.
  • Increased JavaScript Performance. The ongoing browser speed wars have brought about huge speedups - http://whyeye.org/browsers/history-of-javascript-performance-chrome/ - in addition to the gains due to Moore's law. Increased use of HTML5's Web Workers will even allow web apps to start exploiting multiple cores. In short, modern browsers are now really, really fast.
  • Better Browser Visualisation. Frameworks like D3, Highcharts and Raphael can now routinely handle very large datasets - see http://www.highcharts.com/stock/demo/data-grouping.
  • Better Rich Browser Applications. The filtering and display of large datasets requires complex, performant UI frameworks. In addition, loading a large dataset only really makes sense if you are going to perform multiple interactions on it - hence the need for a single-page app. The recent explosion of rich JavaScript application frameworks has provided the necessary capabilities here - http://blog.stevensanderson.com/2012/08/01/rich-javascript-applications-the-seven-frameworks-throne-of-js-2012/
  • Better Data Handling. Traditionally all heavy analytics were performed server-side and the results exposed with web services. Now there is a growing ecosystem of frameworks to handle large datasets client-side. See crossfilter, gauss and jstat.
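The Local Storage sketch promised above - cache the dataset locally and fetch only the rows added since the last visit (the endpoint and field names are assumptions for illustration):

function loadTransactions(callback) {
  // Previously downloaded rows, if any.
  var cached = JSON.parse(localStorage.getItem('transactions') || '[]');
  var lastId = cached.length ? cached[cached.length - 1].id : 0;
  var xhr = new XMLHttpRequest();
  xhr.open('GET', '/api/transactions?since=' + lastId);
  xhr.onload = function () {
    var all = cached.concat(JSON.parse(xhr.responseText));
    localStorage.setItem('transactions', JSON.stringify(all));
    callback(all); // hand the full dataset to crossfilter etc.
  };
  xhr.send();
}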
Why Haven't We Seen More of This?


This approach is currently quite niche. Many of the required components are still relatively immature and limited to modern browsers and fast connections. Therefore I think in-browser analytics may initially find its most fertile ground in internal webapps. In my current role I've focussed on creating analytics websites for internal use only. Due to a managed browser ecosystem (Chrome only) and fast internal networks, handling large datasets client-side is a lot easier.

There is also a mindset change required. Try telling your fellow developers that your webapp loads a whole year's worth of financial records (over 200,000 rows) and then performs statistical analysis on it in JavaScript. This will lead to lots of raised eyebrows. The users don't care though - they just appreciate the fast interface.

The approach is also best suited to datasets that lend themselves to statistical summaries - e.g. loading 50,000 book reviews in one go is unlikely to be useful, as the user will never read the vast majority of them. This will limit its domain somewhat.

What Next?

Join the revolution! Build some In-Browser Analytic Applications (IBAA? - this really needs a better acronym...)

Monday, September 24, 2012

The Reason I'll never use MongoDB again

MongoDB has been getting lots of mixed reviews on Hacker News.

http://pastebin.com/raw.php?i=FD3xe6Jt

http://diegobasch.com/ill-give-mongodb-another-try-in-ten-years

http://svs.io/post/31724990463/why-i-migrated-away-from-mongodb

I also have had a particularly poor experience with MongoDB...

A few months ago, I met some nice people from 10Gen. They gave me a MongoDB mug.

Now, after moderate use with a high-quality brand of tea (http://www.barrystea.ie/) and an industry-standard dishwasher (http://www.siemens-home.co.uk/our-products/dishwashers.html), there have been some disturbing results:


The data stored on this mug has started to fade away badly. If this is what happens to the data I can see, what about all the data in the database? Worse again - no errors were thrown (I'm running in default mode).

Fearing the worst, I migrated our corporate MongoDB instance to an Etch A Sketch stored in a vibration-proof room.

I know I could have raised a bug about this or tried to get some support but a long blog post and a quick over-reaction is a more pragmatic approach. At the very least 10Gen could send me a new mug.

So if you are still using MongoDB - YOU HAVE BEEN WARNED...

Thursday, January 20, 2011

The Artifact per Environment Anti-Pattern

The Problem

How to handle environment specific properties is a topic that seems to generate much discussion.

While this may initially seem a trivial problem, as requirements grow so does the complexity of the solutions. Here are some of the requirements you might have to tackle:
  • Overriding default properties
  • Dealing with properties that need to be kept secret from the development team, e.g. production passwords
  • Properties that need to be changed dynamically (tuned) while the app is running e.g. cache configuration
  • The addition of new environments after the original artifact has been built
I've seen countless approaches to the problem - properties in the JNDI tree, in the database, good old property files, POJOs etc. Each has its own pros and cons, and the solutions range from simple implementations to hugely complex frameworks.

Not the Solution

One solution that is often suggested is rebuilding the artifact per environment and "baking in" the environment-specific properties. This pattern is standard in Grails (http://www.grails.org/Environments) and often recommended for Maven (http://www.sonatype.com/books/maven-book/reference/profiles-sect-tips-tricks.html).

Now I'm sure there are many situations where this works perfectly well - a blunt axe can still chop a tree down! But I believe that, for many reasons, it's sub-optimal.

The Downsides of an Artifact per Environment:
  • Inefficient - the simple fact that you have to rebuild the artifact for each environment is time-consuming and involves repetition of effort
  • Risky - after testing your artifact in your test environment, you then deploy to production a different artifact that was never tested
  • Unnecessary Builds - if the properties change after you build the artifact, you need to build it again!
  • Hard to diagnose problems - did someone deploy the test WAR to production by accident? Get used to cracking open JARs and WARs to check the config.
  • Not compatible with Maven repositories - Maven repositories work best with one version of an artifact per version number, e.g. there's only one Apache commons-collections 3.2.1. An artifact per environment breaks this.
The Alternatives:

So what to do instead? Generally the best way to go is to externalise your properties from the artifact and have it choose the correct properties at runtime. This is too general an area for one prescriptive approach, but hopefully this article has ruled out building an artifact per environment.
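As one minimal sketch of the runtime-selection idea - pick the properties file from a system property such as -Denv=production (the file layout and property names here are illustrative):

import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public class EnvironmentConfig {

    // Loads conf/<env>.properties, where env comes from -Denv=...
    // and defaults to "development". One artifact, many environments.
    public static Properties load() throws IOException {
        String env = System.getProperty("env", "development");
        Properties props = new Properties();
        InputStream in = new FileInputStream("conf/" + env + ".properties");
        try {
            props.load(in);
        } finally {
            in.close();
        }
        return props;
    }
}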

Links:

Some ways to load properties from locations dependent on system variables in Spring 

Externalising Grails config

Sunday, December 5, 2010

Getting Flash 10.2 beta GPU acceleration working on Ubuntu 10.10 on an Acer Revo

Offloading the video rendering to the GPU is one of the main reasons a cheap low-powered box like the Revo can function as an HD video source.
This has been supported in lots of ways previously (VLC etc.) but Flash support was always a bit of a letdown.
Flash 10.2 beta changes that. It offloads almost all the work to the GPU, leaving CPU usage very low. The difference is startling - my Revo can now play full 1080p Flash video with ease.
Here's how I set it up on an Acer Revo running Ubuntu 10.10.
Install the new Flash 10.2 beta:
wget http://download.macromedia.com/pub/labs/flashplayer10/flashplayer10_2_p2_32bit_linux_111710.tar.gz
tar zxvf flashplayer10_2_p2_32bit_linux_111710.tar.gz
sudo cp libflashplayer.so /usr/lib/flashplugin-installer/
You may also need to replace Flash in other locations if you have installed it in multiple places (try 'locate libflashplayer.so' for a complete list).
Then for GPU support you also need:
sudo apt-get install libvdpau1
If you also want it working in Chrome try:
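# An assumption on my part - Chrome on Linux scanned its own plugin
# directory at the time; adjust the path to match your install:
sudo mkdir -p /opt/google/chrome/plugins
sudo cp libflashplayer.so /opt/google/chrome/plugins/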
To verify it is working, go to a high-definition YouTube video and play it. While it is playing, right-click and select "video info". You should see "Accelerated Video Rendering" if it has worked.
These instructions worked on my Acer Revo with Ubuntu 10.10.
Links:
My answer (and question!) here: 
This helpful thread:

Tuesday, October 5, 2010

How to build a one click deployment job with Hudson

While there are many solutions available for automated deployment, none of them seemed quite right for our needs. After one failed attempt at a homegrown solution our team went back to the drawing board and specified our requirements:

The ideal solution had to be:

  • Easy to Use - with an audience ranging from developers to business analysts the solution needed to be intuitive and easy to explain.
  • Modular - things change: if we changed app server or SCM we wanted only minimal changes to the tool. This is also useful for testing parts of the process in isolation.
  • Not tied to the build tool or process of the deployable. Once I've built the artifact, I don't want anything to do with its build config - that's right I don't want to look at its pom or build.xml. When working with multiple teams with differing build approaches, this is a must.
  • One stop shop - We want the ability to deploy any of our artifacts from one place.
  • No Magic - The worst thing that can happen with an automated approach is that when it fails no-one knows why. We wanted to know what was happening during every step of the process.
  • Auditable - We needed to know who deployed what, to where and when.
  • Secure - Can I restrict who deploys what to where?
  • Simple - We're lazy and code costs - the solution should contain as little custom code as possible.
  • Repeatable - The tool should deploy any artifact to anywhere but some deployments are used over and over again e.g. latest snapshot to the systest environment. Repeating these builds should be simple.

As with most things there is no silver bullet but the approach below worked very well for us.

Step 1. Build a simple solution you can call from the command line

Build a bare-minimum solution that works from the command line. Don't think about GUIs, security or anything else. Do think about what the minimum set of inputs is - e.g. enough to identify the deployable and the target server, no more and no less. For us this consisted of three very simple Groovy scripts:

  • The first simply took the artifact ID and version, constructed the correct URL to our Maven repository and downloaded the correct deployable. 
  • The second script pushed the deployable to the remote server. 
  • The third took the deployable and ran the actual deployment to the app server. 
Each step was extremely simple, logged exactly what it was doing and could be tested in isolation. We then wrapped the three scripts in one master script that called each in turn. Through this approach we now had the means to deploy any artifact in our Maven repository to any of our servers. By using a command-line-compatible approach you leave your options wide open for calling the job from almost anywhere. Tools to help with this step come included with most application servers, or see Cargo for a catch-all solution.
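For a flavour, the first script looked something like this (a sketch - the repository URL, layout and packaging here are illustrative):

// Resolve and download the deployable from the Maven repository.
def repo = 'http://our-maven-repo/releases'   // illustrative URL
def (groupId, artifactId, version) = [args[0], args[1], args[2]]
def path = "${groupId.replace('.', '/')}/${artifactId}/${version}"
def url = "${repo}/${path}/${artifactId}-${version}.war"

println "Downloading ${url}"
new File("${artifactId}-${version}.war").withOutputStream { out ->
    out << new URL(url).openStream()
}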

Step 2. Add the command line solution as a parameterised job in Hudson

Hudson already provides great security and auditing abilities, so it seemed like a natural fit. We used a parameterised build to pass the necessary parameters to the scripts: http://wiki.hudson-ci.org/display/HUDSON/Parameterized+Build

We used drop-down fields for target servers and group IDs, leaving the artifact ID and version as free-text fields. Turning on Hudson security ensures an audit trail, and if necessary the job can be locked down to certain users only. Hudson also gives you a simple GUI that anyone can use.

Step 3. Enhance the Hudson Job with a Groovy Post Build step.

Secret sauce... While you now have a fully working solution, there are some things that can be improved. I had promised one-click deployments, and the Hudson GUI is lacking in some areas. To improve this, use the Hudson Groovy Postbuild plugin: http://wiki.hudson-ci.org/display/HUDSON/Groovy+Postbuild+Plugin

Configure a post-build step to add a badge to each build detailing what was deployed, by whom and to where. Then, using the passed-in parameters, you can construct a Hudson URL that will repeat the same deployment job again, e.g. http://myhudson-server/job/myDeploymentJob/buildWithParameters?Artifact=website2&Version=1.5-SNAPSHOT&TargetServer=SystemTestBox4 Add this URL to the build badge too. By creating bookmarks with this URL you now have truly "one click" deployments. These URLs can also easily be sent to non-technical users - e.g. by providing them to the test team, they can pull down the latest build for testing whenever it is convenient.

Conclusion:

After a lot of thought, I feel we have come up with a solution that ticks all the boxes. It works well for us and after over 1000 deployments to both test and production systems I'm happy to say it's a huge success. Good luck with your approach and let me know how it goes in the comments.

Sunday, October 4, 2009

Mockito 1.8 - new useful features

I was once a happy EasyMock user. If asked, I think I would have even questioned the need for a new mocking framework – EasyMock did it all, didn’t it?
But after using Mockito for a short while I was impressed by its efficiency and ease of use. The key features for me were its simple, intuitive API and the way any mock object returns sensible defaults for all of its methods - allowing you to concentrate on the behaviour of the methods you care about.
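For example:

import static org.mockito.Mockito.mock;
import java.util.List;

// Unstubbed methods return sensible defaults (0, false, null, empty
// collections) - no setup required before interrogating the mock.
List<String> mockedList = mock(List.class);
System.out.println(mockedList.size()); // prints 0
System.out.println(mockedList.get(0)); // prints null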
The framework continues to improve incrementally with each release. This post gives a quick view of two of the most useful new features in Mockito 1.8.

BDD-Style Language Supported Natively

BDD (Behaviour Driven Development) is rapidly becoming mainstream. Standard BDD conventions recommend the use of given-when-then comments in your tests. This improves readability by giving a clear delineation between test setup, execution and assertions.
Mockito's method names were previously in conflict with these BDD conventions. Version 1.8 allows you to stick with the BDD conventions while remaining backward compatible with older Mockito tests.
import static org.mockito.BDDMockito.*;

Seller seller = mock(Seller.class);
Shop shop = new Shop(seller);

public void shouldBuyBread() throws Exception {
  //given
  given(seller.askForBread()).willReturn(new Bread());

  //when
  Goods goods = shop.buyBread();

  //then
  assertThat(goods, containBread());
}



Capturing arguments


Mockito has always supported a wide range of matchers to allow verification that mocked methods are invoked with the expected arguments. Sometimes, however (especially when dealing with legacy code), the standard matchers limit the checks you can do without the overhead of writing a custom matcher.


Version 1.8 introduces new functionality to capture and store the arguments passed to mocked methods. Standard JUnit assertions can then be applied to the captured arguments. Over-reliance on capturing arguments would be a code smell in my opinion, as most well-abstracted code should not need to do this. However, for testing legacy code and interactions with outside systems, ArgumentCaptors can be very useful.



ArgumentCaptor<Person> argument = ArgumentCaptor.forClass(Person.class);
verify(mock).doSomething(argument.capture());
assertEquals("John", argument.getValue().getName());


More Info…



Downloads and more information can be found at mockito.org. For Maven users - simply add the following dependency to your pom:



<dependency>
  <groupId>org.mockito</groupId>
  <artifactId>mockito-all</artifactId>
  <version>1.8.0</version>
</dependency>