Robert van Loghem

Voice-navigated apps: the next hype?

In the 1980s there was a TV show called Knight Rider, in which Michael Knight, a vigilante, would fight bad guys together with his car K.I.T.T. The thing that made this show special to me was the car: Mr. Knight could talk to it, and it would understand what he said and meant and respond meaningfully, sometimes throwing in a witty remark. It gave the car a personality; it was the co-star of the show.

Siri

Now, in 2011, Apple has released Siri: an assistant you can ask things like "What is the weather going to be like tomorrow?", and, just like the car K.I.T.T., it responds with the correct information; in the above case, the weather for tomorrow based on your current location. If I ask Siri what THE answer is, it sometimes responds with the number 42, which for nerds and geeks is a pretty witty answer (as it is THE answer to THE question in The Hitchhiker's Guide to the Galaxy). So Siri, it seems to me, has personality; it answers questions with a certain flavor. For me, in 2011, it was the very first time you could ask a device (a mobile phone) almost anything and it would (try to) give a smart, witty answer.

Hype?

So is the fact that you can talk to your phone, and that it actually understands what you mean and then responds, something we're going to see more of in the future, or is it just hype?

I think it's going to be huge and is here to stay. Here is an example why. Compare the following: how many touch-clicks and seconds does it take to create a new appointment for tomorrow with my dentist in my home town? Using touch/clicks on the interface, it takes me about 30 seconds. When I ask the phone to make the appointment, it takes me 4 seconds. That is a lot faster. The only thing I needed to do was say the line "Make an appointment with my dentist in my home town tomorrow at nine o'clock". With touch, I needed to create a new appointment, type in the words "Dentist" and "home town", set the time and then save.

There are other examples where voice is much faster than touch:
- Banking app – "Transfer 300 Euros from my savings account to my wife's payment account", 5 seconds for voice, >15 seconds for touch
- Home heating app – "I'll be home 2 hours early, set the temperature to 19 degrees Celsius", 4 seconds for voice, >10 seconds for touch

Next big thing…

So why is it going to be the next thing in user interfaces? Because, next to the fact that it is more personal (the device can respond like a human would) and more natural (you say things the way you would say them to another human), it is also a much faster way to interact with your device. Touch has made graphical user interfaces much simpler to use (even toddlers know how to swipe to the next photo), but with speech a person can ask a mobile phone for something complex within a number of seconds and the device will execute the task. Google is also catching on and is launching Assistant, aka Siri for Android, at the end of the year, meaning voice-navigated apps are becoming mainstream.

In an upcoming post I'll dive deeper into the details of how you can actually start coding up your own voice-navigated app using advanced speech recognition (ASR), natural language processing and text-to-speech (TTS) technologies. Because while it might sound really simple, actually hearing and understanding what humans say is really hard.


iOS + Xcode 4 + GHUnit = Mobile TDD + Continuous testing, part 2 of n

Last time I explained why I think doing TDD for mobile is imperative, and why I do it. But now it's time to get technical and explain how to set up GHUnit in Xcode 4 and run unit tests, not only in the iPhone and iPad simulator but also on your own physical device! The walkthrough is in text and images, but also in video form on YouTube.

Note: if you want to know why I chose GHUnit over OCUnit, just scroll down to the end of the post.

But wait…

Before I begin, I want to make one thing very clear: the difference between code unit testing and UI testing. Unfortunately, UI development can be hard to do in a TDD fashion, especially when you want to test UI components, e.g. when I send a touch event, will the view respond and trigger my method in my controller?
My advice: don't do UI testing with a unit testing framework (OCUnit, JUnit, GHUnit); do it with, for example, the iOS UI Automation API, which has been created specifically to test those UI components. I'll also get back to you in a later post on how to do UI testing on Android.

What do you test with unit testing frameworks? Well, you test just the code, nothing more: model and controller code, not the view! You might need the help of a mocking framework to make it testable, because the view is missing and needs to be wired up for the controller and model to work properly, or the code under test might need other controllers, etc.
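
To make this concrete, here is a minimal sketch of the kind of test I mean, in GHUnit. The ShoppingListController class and everything in it are hypothetical, made up purely for illustration; the point is that the test exercises plain controller logic with no view wired up at all, so no mocking framework is even needed yet:

#import <GHUnitIOS/GHUnit.h>

// Hypothetical controller under test; the class and its methods are
// illustrative only, not from a real project or library.
@interface ShoppingListController : NSObject {
  NSMutableArray *items;
}
- (void)addItem:(NSString *)item;
- (NSUInteger)itemCount;
@end

@implementation ShoppingListController
- (id)init {
  if ((self = [super init])) {
    items = [[NSMutableArray alloc] init];
  }
  return self;
}
- (void)addItem:(NSString *)item {
  // Ignore empty input; logic like this is trivially unit-testable
  // because no view is involved.
  if ([item length] > 0) [items addObject:item];
}
- (NSUInteger)itemCount {
  return [items count];
}
- (void)dealloc {
  [items release];
  [super dealloc];
}
@end

@interface ShoppingListControllerTest : GHTestCase { }
@end

@implementation ShoppingListControllerTest
- (void)testAddItemIgnoresEmptyInput {
  ShoppingListController *controller = [[[ShoppingListController alloc] init] autorelease];
  [controller addItem:@"Milk"];
  [controller addItem:@""];
  GHAssertEquals([controller itemCount], (NSUInteger)1, @"Empty items should not be added");
}
@end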

Let’s begin setting up!

With that in mind, let's set up our own iPhone Xcode 4 project, add GHUnit, create a test and run it in the simulator or on your own iOS device: iPhone, iPod touch or iPad.

1. First of all, download the GHUnitIOS framework (version 0.4.28 at the time of writing) from https://github.com/gabriel/gh-unit/archives/master
2. Unpack the downloaded zip file somewhere in your home directory; you should end up with a GHUnitIOS.framework directory

NOTE: I first placed it in /Developer/Library/Frameworks, but Xcode 4 didn't like that and could not find the header files when compiling, so I placed it somewhere in my home directory instead (e.g. /Users/rvanloghem/Development/Frameworks/GHUnitIOS.framework)

Right, you are now ready to set up your GHUnit-ready Xcode 4 project.

3. For example purposes, I'm choosing a normal, navigation-based application, but you might have an existing project.
- NOTE: don't check Include Unit Tests, because we are going to supply our own unit testing framework rather than rely on OCUnit, which Xcode supplies by default

4. In your Xcode project settings (the blue root icon in the tree browser), add a Target which you can call Tests; I usually base it on a simple View-based Application (a Target named Tests will be added, plus a folder named Tests with all the Tests target files in it)

5. Next, go back to the Tests target (click on the Tests target) and add the GHUnitIOS.framework which you downloaded and unpacked in steps 1 and 2 (click on the Build Phases tab, open up Link Binary with Libraries, hit the + button, click Add Other and navigate to and select the GHUnitIOS.framework directory on your filesystem)
6. Optional, but nice: move the GHUnitIOS.framework in your tree to the Frameworks folder, to tidy things up

7. Set the -ObjC and -all_load flags in Other Linker Flags on the Tests target (select the Tests target, select the Build Settings tab, search for "other linker flags" and add the two flags)

8. Now you can delete some files which are not necessary: all the files in the Tests folder (note: not the Supporting Files folder as well, just the files!)
9. Delete the main.m file in the Supporting Files folder

10. In the Tests-Info.plist file (again, in the Supporting Files folder), clear out the Main nib file base name value

Time to create the GHUnit test runner, which will scan for our unit test cases and run them.

11. Create an Objective-C class in the Tests folder named GHUnitIOSTestMain and make sure it is added only to the Tests target!

12. You can delete the GHUnitIOSTestMain.h file
13. Copy and paste the source code from http://github.com/gabriel/gh-unit/blob/master/Project-IPhone/GHUnitIOSTestMain.m into your GHUnitIOSTestMain.m file; a rough sketch of what you'll be pasting is shown below
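
From memory, that file boils down to roughly the following. Treat this as a sketch and use the linked file as the authoritative version; it starts the GHUnit test app, which in turn discovers and runs your test cases:

#import <UIKit/UIKit.h>
#import <GHUnitIOS/GHUnit.h>

// Rough sketch of the GHUnit iOS test main: if the GHUNIT_CLI environment
// variable is set, the tests run from the command line; otherwise the
// GHUnit test app (with its own app delegate) is launched.
int main(int argc, char *argv[]) {
  NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
  int retVal = 0;
  if (getenv("GHUNIT_CLI")) {
    retVal = [GHTestRunner run];
  } else {
    retVal = UIApplicationMain(argc, argv, nil, @"GHUnitIOSAppDelegate");
  }
  [pool release];
  return retVal;
}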

And now it’s time to create our own test case which will fail ;)

14. Again, create an Objective-C class in the Tests folder and, as this is an example, name it ExampleTest. Also make sure it is added to the Tests target only!
15. You can delete the ExampleTest.h file
16. Copy and paste the next piece of code, which is 99% copied from the example code from GHUnit (http://gabriel.github.com/gh-unit/_examples.html)

// For iOS
#import <GHUnitIOS/GHUnit.h>
// For Mac OS X
//#import <GHUnit/GHUnit.h>

@interface ExampleTest : GHTestCase { }
@end

@implementation ExampleTest

- (BOOL)shouldRunOnMainThread {
  // By default NO, but if you have a UI test or test dependent on running on the main thread return YES
  return NO;
}

- (void)setUpClass {
  // Run at start of all tests in the class
}

- (void)tearDownClass {
  // Run at end of all tests in the class
}

- (void)setUp {
  // Run before each test method
}

- (void)tearDown {
  // Run after each test method
}

- (void)testFoo {
  NSString *a = @"foo";
  GHTestLog(@"I can log to the GHUnit test console: %@", a);

  // Assert a is not NULL, with no custom error description
  GHAssertNotNULL(a, nil);

  // Assert equal objects, add custom error description
  NSString *b = @"bar";
  GHAssertEqualObjects(a, b, @"A custom error message. a should be equal to: %@.", b);
}

- (void)testBar {
  GHAssertTrue(TRUE, @"Yes it worked");
}

@end

Right, ready to run!

17. Launch your Tests target (the iTunes-like play button) and run it against the simulator scheme; your unit test app should start

18. Now, in the app in the simulator, hit the blue run button and your tests will execute (and testFoo will fail! You can click on it to see why it failed)

And now it’s time to fix it…

19. Change the NSString *b = @"bar"; in the testFoo method to NSString *b = @"foo";
20. Run the Test app again and re-run the tests; they should be green, or in this case black, which means your tests are OK

Showing the power of GHUnit

21. Run the app against your own iOS device scheme (select the iOS device scheme and click the iTunes-like run button)

Why I chose GHUnit over OCUnit

IMHO, this shows the real power of the GHUnit testing framework: not only does it run in the simulator, it also runs on your own iPhone, iPad, etc. OCUnit, by contrast, can only run as part of your build on your own machine, not on your phone and not in the simulator, and that, for me, is a big dealbreaker.
The closer you can get your unit tests to running against a real-world environment, the better. Why? Because you are making use of the real device's processor (not an Intel x86), real memory management (or lack thereof), the real APIs, etc. If my unit tests run on my phone, I'm 99.999% certain that the code under test will actually run on, yes, you guessed it, my phone.

There is of course a downside to GHUnit: OCUnit (bundled with Xcode) can be run automatically before your own app compiles, which makes getting feedback about regressions a lot faster, whereas GHUnit is something you have to run manually. But to solve that problem, or at least make it a whole lot better, we can use continuous integration (aka a build server) to do the auto-running of unit tests for us. There is a very nice blog post which compares various iOS unit testing frameworks.

So what is next? Well, the topic for my next blog and video in this series is hooking up an Xcode project + GHUnit to Jenkins (or Hudson, for the Oracle-minded people out there).


Why TDD + Continuous testing is imperative for mobile apps (part 1 of n)

For the last couple of months I've been developing mobile applications; some are for the business at home (a girlfriend-shopping-list app that actually works, and an augmented-reality garden iPad app) and some are for work. I have experienced that TDD (Test-Driven Development) and continuous testing are a way of working that leads to fewer bugs, fewer regression problems and better design in my software; it's my preferred way of programming, not testing.

Mobile TDD is imperative

And to start off, here’s how I benefit from doing TDD:

  1. Robust apps
  2. Better code design, no really!
  3. Find regression problems early on

and you can read more on TDD and Continuous testing here.

The thing is, writing a mobile app takes about 20% of the time it would take me to write a web+client+server based app, which of course is really nice, because I can write lots of apps. It also means that whenever I need to fix a bug or add new functionality, I need to have developed the app in a TDD fashion, otherwise I cannot guarantee reliability. So let me explain why…

Previous experience

The last complex piece of software I've worked on is a product called Deployit, which can deploy Java applications to application servers: not just one, but many different types of servers, with the deploy part for each type encapsulated in a piece called a plugin. For every plugin we have written lots and lots of tests, and boy, do they prove to be a lifesaver.

As a developer I'm not just working on a single plugin, I'm working on the entire product (partly because we are a small team, and partly because I like to). That means that one week I'm adding new deployment logic to plugin A and the next week I'm fixing a bug in plugin B. I'm a good programmer with a good memory, but working on so many plugins while quickly fixing problems or adding functionality is hard. I have to context-switch between the intricacies of plugin D and the complexity of plugin W, so I am bound to make a mistake or two, which will delay the release or upset users because functionality that used to work is now broken.

So how does this compare with mobile apps? It's simple: plugins, for me, resemble apps. In the mobile world you are bound to work on multiple apps in a short amount of time, and that means you need help keeping those apps stable while you are working on them.

Translation of TDD+Continuous testing to mobile development

Did you ever update an app that you worked on 2 months ago and accidentally break some old functionality? Then you must have seen what happened to your user ratings: they went down, right? People expect functionality that has always worked to keep working; otherwise you get 1 star or, worse, the app gets deleted! New functionality can have some bugs, because it is new; existing functionality should just work, and there is no excuse for it not to.

Unless your brain can keep track of what the code of every app you might work on is doing, which I don't think it can (if it could, you would be working at NASA on "Space Bus 3.0"). That is where your automated tests come in: they really help you make a fix while guaranteeing that your old functionality does not break.

It also helps you make new functionality more robust: don't just test the happy flow, also test the monkey flow. Input some weird data, trigger some weird touch events, see if you can make your app crash, and do this in an automated fashion. Your app becomes much more robust and your users will love you for it.
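
As a minimal sketch of what such an automated monkey-flow test can look like in a unit testing framework like GHUnit (the QuantityParser class here is hypothetical, made up purely for illustration):

#import <GHUnitIOS/GHUnit.h>

// Hypothetical class under test; illustrative only.
@interface QuantityParser : NSObject
- (NSInteger)parse:(NSString *)input; // returns 0 for unusable input
@end

@implementation QuantityParser
- (NSInteger)parse:(NSString *)input {
  // Clamp garbage and negative values to 0 instead of blowing up.
  NSInteger value = [input integerValue];
  return value > 0 ? value : 0;
}
@end

@interface MonkeyFlowTest : GHTestCase { }
@end

@implementation MonkeyFlowTest
- (void)testParserSurvivesGarbageInput {
  QuantityParser *parser = [[[QuantityParser alloc] init] autorelease];
  // Weird, hostile and boundary inputs, fed in automatically.
  NSArray *garbage = [NSArray arrayWithObjects:@"", @"-1", @"not a number", @"999999999999", nil];
  for (NSString *input in garbage) {
    GHAssertNoThrow([parser parse:input], @"Parser should not throw on weird input: %@", input);
  }
}
@end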

What is next…

The above words are all fine, but how do we actually do this? In the next couple of blog posts I will go into technical detail on how to do:
1. Unit testing (cheap testing at the code level: OCUnit, JUnit)
2. Continuous integration (using Jenkins)
3. UI testing (not-so-cheap testing at the UI level: automation APIs, mocking frameworks)
and do this for the iOS and Android platforms.


Continuous deployment impressions at #JFall 2010, NL

Last week (November 3rd) Andrew Phillips and I did a presentation on continuous deployment, at the awful (so we thought) hour of 8 o'clock in the morning, for the NLJUG. We only expected a handful of attendees, but fortunately the day before we were told that we had been moved to a bigger conference room because of the great number of people signing up! So at 8 o'clock we had just under 100 people in the room! If you were there: thanks for coming so early, we really appreciate it. It of course shows that continuous deployment is a hot topic ;)

Doing the continuous deployment talk at JFall

So what did we actually talk about? Well…

  1. Where we are now: continuous build/integration, storing the results of a build in a big safe (aka a repository) and not getting it to the user, who is expecting a working application
  2. The steps to get your application ready for continuous deployment
  3. How to do your continuous deployment
  4. How to automatically run a post-deployment test after a successful deployment
  5. And the live demo, of course ;) We changed some code and checked it in via Git; our continuous build server discovered the change and started building; it kicked off Deployit (our deployment automation product) to do the deployment to a WebSphere Network Deployment application server + Apache HTTPD server; and finally Hudson triggered JMeter to test the deployed application, with Deployit feeding JMeter the host information so it could connect to the right host!

All in all it was really cool to do the presentation + live working demo. We think continuous deployment is the next big step in the automation area for development. In the future we'll definitely go deeper into how we have set this up, by doing some YouTube videos and blog posts and by presenting at more conferences!


Future of Deployment – Part 2.5 Getting your virtual appliance from development to production

Virtual Appliance

In my previous post, "Future of deployment, part 2", I talked about the new EAR, which is an image with an OS and your application.
Now, before diving into part 3, which will get you going in creating your own virtual appliance aka "the image", there is one really big thing I forgot to mention: some of the benefits of delivering a virtual appliance and getting it from your own development environment to the production environment! I'll list the benefits for administrators/ops and for developers.

Ask and thou shalt receive thy environment, immediately :o

So you start developing your new application "CuteAnimalPark" (yes, we at XebiaLabs like animals ;) ), and you talk to the admins and ask them for 3 environments: Test, Acceptance and Production. They of course respond, "That is fine, we'll have it up and running when you can deliver your application", aka "virtual appliance", and give you a link where you can download the virtual image on which you can install your application.

So what happened here? Well, I wanted 3 environments and got none!? But I did get an image, which is in fact the image that is going to run in production. The image is hardened, has security enabled, has all user accounts and file-system rights in order, and is tuned for serving the real-world interwebs.

The job of the operators is to make sure applications get to production and run reliably and fast. They are the ones getting called out of bed if my application dies at 3 AM. Therefore they want to give you an image which is the SAME one that is going to run in production.
That means, IMHO, that one of the day-time jobs of these guys and gals is to prepare the images (OS + middleware) for applications and give them out to development teams, so those teams can in turn create virtual appliances.

Action #1 Get a production-ready image to install the application on.
Benefit for Devs #1 You develop and deploy to a "really-really-close-to-production" system
Benefit for Ops #1 You get an application that is known to run on a production system

Running the production image aka virtual appliance in environments other than production

So when you have installed the application on the image, you have to make sure it can be used in environments other than development. The application might need to connect to a database, and the database will usually differ from one environment to the other. So as soon as the application is placed into an environment, you have to get in there and change the URL, and most probably the username and password, of the datasource the application is using, so it will connect to e.g. the database in test and not the developer's own MySQL database on his local machine.

Changing the database an application uses should be as easy as changing the properties on a datasource, and most of the JEE containers make this fairly easy. But I've personally seen lots of other, very environment-specific properties sitting in property files inside JAR files, which needed changing when moving the application from one environment to the other. Not so easy then ;(

Make sure, when you design and develop your application, that environment-specific properties are easily accessible and can be changed by operators.

Action #2 for Devs Design and build your application so it can exist in multiple environments
Action #2 for Ops This also applies to operators, who have to be aware that the image they deliver will need to run in different environments (e.g. use hostnames when installing middleware, not IP addresses)
Benefit for Devs #2 Your application is portable; it can be moved almost anywhere, and administrators can do it without your help
Benefit for Ops #2 You can easily move images wherever you want; moving stuff from your private cloud to the public cloud becomes a lot easier, per application!

Deploy Virtual Appliance

After this step, your virtual appliance can be deployed to the various environments, which of course are running some sort of virtualization hypervisor from some known vendor. Every time the appliance is deployed, the operators configure it before it is started: making sure the datasource connects to the right database, the queues connect to the right message broker, and so on.
After configuring the application you can start up the whole bunch and bask in glory. (Make sure the ops know how to properly start the application and how they can tell that it works.)

The flow from development to production is that simple. But wait, there's more: troubleshooting!

Again, here is the flow from the two paragraphs above:
- Get a production-like image
- Install your application on it
- Allow operators to move your virtual appliance from test to production (without needing your assistance)

But of course stuff can go wrong in production: your application might break, or have a race condition under extreme load, and you as a developer want to get your hands on production and find out what is wrong. Alas, the operators will not allow you access to production; most likely they'll send you the logs and the thread and heap dumps, but that is it. So wouldn't it be nice if you could have access to production when it is not being used as production?

With virtualization you can get a snapshot of the images running in production, transfer it to your local development environment and really dig into finding the problem. There is still the trouble of generating real user requests, but this is a very big step toward getting your hands on the place where the real problem occurred.

Action #3 Whenever an issue pops up with an application, create a snapshot of the image and let the developer have access to it
Benefit for Devs #3 You can get to the source of the problem more easily
Benefit for Ops #3 You still don't need to give developers access to production if you don't want to; just let them have a copy of the current state of production

Make sense?

Well, there you have it: this is what I think will be the biggest change in the way we deploy an application from development to production in about 3-5 years. As I mentioned before, post 3 will be about doing it yourself with the tools currently available, from XebiaLabs of course, and VMware.


Future of deployment: Part 2 – The Image in the Cloud is the new EAR

Last December I wrote the first part of my Future of Deployment series, explaining the difference between big ol' servers with a gazillion applications and lots of shiny new small servers, each with its own application. This time I'm going to go to the cloud, or your virtualized servers, and give you my vision of how we are going to package and deploy applications in about 3-5 years.

How we used to deploy an application

Well, you all know this one by heart: you get your environment up and running: install an application server, set up your database, choose an SQL script to run against the database, configure resources and deploy the application in the application server. After everything is in place you start the whole bunch and bask in glory!

Does the above deployment scenario apply to virtualized/cloudy environments? Yes it does, of course! The environment setup is greatly simplified: using stuff like AMIs or virtual images (aka appliances ;) ) you get your database or application server out of the box. But installing the application and configuring its resources is still the same old boring, cumbersome task.

Taking ‘it’ all the way to the top

If you look at a lot of the current images out there, they are basically just an operating system which you can instantiate. Some of the vendors, like Oracle, have images that also contain an installed database on top of an OS, like a favorite flavor of Linux. But imagine that you could bundle up your own application with an image, which then becomes your own virtual appliance. You use your own virtual appliance (OS + installed application + config files) to set up your environment quickly, and voilà: no more boring installing of application servers/databases and deploying of applications per environment.

Sounds too good to be true eh?! In this day and age it unfortunately is.

The ‘biggest’ problem

It's the size of the virtual image.
Let's say you have created a new release of your application. In total, WAR + config files + some DDL scripts come to about 100MB; not a big deal indeed! Easy to copy across to different environments. Now try to create a virtual image of this release.

Step 1: get the image (2GB: OS + installed application server + installed DB)
Step 2: install your application on the virtual image (+= 100MB)
Step 3: prepare the image and pack it up

Total size of your package, or bundle if you like: OS + appserver + DB + app = 2100MB. Try converting and copying your 2.1GB VMware image to Amazon EC2 over the internet! It will take a while, and that's a lot of cups of coffee before you can get it up and running. (The smarter persons out there will just use the AMIs already on Amazon and re-bundle, which solves the problem :) )

Too simple for you?

So who puts their database on the same virtual server? Nobody, or at least nobody serious ;) Well, for these folks you can create a second virtual image with the database and package it with your application image. Now you have 2 images that can be used together, but yet again that means more uploading and perhaps more disk space wasted.

And what about resources you need to connect to which you can't package up with your images, like a corporate LDAP or, even worse, a mainframe system which needs to be accessed via MQSeries and a message broker? Here it becomes clear that after creating images and instantiating them in an environment, there are sometimes still activities needed to make your application fully up-and-running-functional, like configuring JMS resources to connect to a remote queue manager. So in a lot of cases we still need automated deployment to get it all going. And while we're at it, you can have a sneak peek at our Deployit 1.3-Beta, which is already beginning to incorporate some of these packaging concepts, like a deployment package with queues, datasources, etc.

Still, it's a nice picture to draw… aka the image is the new EAR

Packaging/bundling up your application with an OS and delivering it as your release: you get to do almost whatever you want on that image, whatever is best for YOUR application. Tune not only the application server, but even the OS! Use whatever libraries you want. The sky is the limit! ;) Or at least whatever the sysadmins will tolerate.

This is, IMHO, the biggest plus! You get to deliver something which is very, very close to the production setup. This will eliminate, or greatly reduce, the runtime and configuration issues which you may face when getting your application running in production.

In part three, I'll have a look at where the virtualization/cloud vendors, like IBM, Oracle and VMware (+ Spring, anyone?), are now, and at how you can start moving from just delivering a WAR/EAR to delivering virtual images/appliances tomorrow (and not in 3-5 years ;) ).


Future of deployment: Part 1 – Monuments vs Cheap housing

I'm going to start a series on the future of deployment: how and what will we deploy in, say, 5 years or so? Of course this is my opinion, so please add your own ideas in the comments below.


To start this series off I'm going to talk about the current state of things, or at least what I see at a lot of enterprise customers. Most of the enterprises I've been at have physical servers which are used by numerous applications from different development teams. Some of these servers are old and have been maintained by operations for years (4+ years ;) ). That means that the server has changed: lots of deltas, aka patches, deployments etc., have been applied, and as my colleague Vincent has stated, applying deltas has its cons ;) Of course I'm talking about servers and not applications, so the same rules do not apply. Or do they?

Deltas on servers are bad, period.

I think the same rules do apply. Applying deltas might be faster, but in the end it becomes increasingly harder to map out the path you have taken from 4 years ago up till now, and this is oh-so obvious on the servers themselves. Try to rebuild a 4-year-old server where every week at least 5 deployments have been executed, every month a patch or two has been applied to the OS or middleware, and every six months some change to the filesystem has taken place. It is just plain hard.

And here is the prime example

A couple of years ago I witnessed a project that was trying to move an entire server and application environment from one location to another, and in the meantime trying to get rid of some out-of-date standards which were lingering on those servers. They had automated deployment scripts for all their applications, so the only thing they needed to do was make sure they had a clean environment in the new server location where they could install the latest and greatest versions of their applications. They tried for 6 months to get it working, but failed because they could not properly reproduce the servers at the remote location; so much old, out-of-date stuff on those servers was needed by applications! In the end they gave up and moved all their servers by restoring server backups at the remote site. The lesson this company learned was to spread their applications across different servers. This allowed them to keep their servers and applications more up to date and to get rid of out-of-date standards more visibly.

Introducing the new is easy, getting rid of the old, just let it be…

The company created new servers which were going to be used by new applications, and could therefore install them almost any way they wanted to. New application deployments could then use the new features of those servers, and almost everything was good. Whenever an old application wanted to make use of new functionality only available on new servers, the team had to adjust its deployment and sometimes its code. It was accepted that when using new functionality, you moved to a new server with an updated JDK, new log file paths, more memory, a new version of the application server or a portal, and so on. Applications had a natural upgrade path: old applications run on old servers, but those servers do not require much maintenance except for the odd patch and cleaning of log files. New applications run on new servers with better middleware, tools, etc., making maintenance life somewhat easier on a different level.

Different levels of maintenance – Monuments vs. Cheap housing

How can more servers result in lower maintenance? Isn't that just weird? Yes it is! But the difference is this: if I have one server for all my applications, it becomes hard to make changes to the server and those applications. Just like 40 people (applications) living in a monumental building (server): for every change you have to figure out what the impact is on the building itself and on the people living there. In my own experience, every time I wanted to make some change to the server I had to go through a committee to get approval! The committee consisted of not only the hardware/OS/middleware people but also all the application people. All 40 of them ;( You might feel my frustration as I requested my third change to that particular server that year. When we moved to smaller servers (cheap housing) with fewer applications (small families), it got a whole lot easier to make changes ;) OK, OK, so the amount of maintenance wasn't the problem; getting consent from 50 people for a change and then finding out whether it worked in the monument was the problem. Changing something and/or building a brand new cheap house felt like a breeze!

So what about the future then?

In my next post I'll explore what is, in my opinion, the next big thing after the "Monument" and "Cheap housing". Of course it has something to do with cloud/virtualization technologies. It will be all about moving appliances! And it is something that Deployit will provide support for.


So what is a deployment really?

You just baked the first release of your application using Maven. Next, you start up the administrative console of the application server in the development environment. Then you deploy the fresh loaf of EAR file to the server and fire up your browser to see if you can reach the application. As you try to load the page, you get a DNS error: "Host not found". Time to phone Bob, the friendly operator of all that funky infrastructure and middleware. Bob is of course happy to add a DNS record that will point www.app-in-dev.com to an Apache server. "Wait! An Apache server?" you exclaim. "But it should point to our application server, not an HTTP server thing."

Bob, by now used to having to teach young developers the intricacies of modern network topologies, calmly explains that all requests coming from a browser must first go through a cluster of HTTP servers before being routed to the application servers. "You also need to configure Apache!" he says. "But I am developing a Java application. I only need to deploy to an application server and then I am done," you respond. Bob sighs. "Listen son, pressing the deploy button in the administrative console is only a small sentence in the big deployment story."

The goal of a deployment

Before I start talking about the full deployment story, you first have to realize what the goal of a deployment is.
It is: "making an application available to end users". This means that the end user opens his/her browser, types "www.app.com" and sees the application, fully functional, in production (hence the www.app.com URL)!

So what is needed to run the application? Let's follow the end user's browser request and see what components it hits.
First it hits a firewall, then an HTTP server, then another firewall, then the application server and finally the application running in the application server, which uses a database to retrieve some data. In total there are 5 components involved in getting the application successfully available: we need to not only configure all 5 components, but also place some data on some of them and make sure they are (re-)started in the correct order to get the pretty picture to the end user.

If I were to write down all the steps needed to do a full deployment of the application in production, it would be something like:

a. Run create table definition SQL scripts against the database.
b. Configure JDBC data source in the application server
c. Configure http ports and virtual hosts in the application server so the application is reachable for http requests
d. Install the application on the application server
e. Start the application
f. Configure the firewall, by opening ports to allow communication between the http server and the application server
g. Configure an http server so that requests coming in for "www.app.com" that do NOT end with e.g. .js, .html or .gif are routed through to the application server
h. Place the static html content on the http server
i. (re-)Start the http server so it reads in the new configuration
j. Configure the outside firewall to allow access from "www.app.com" to route to the right http server

Of course, a more complex environment just adds steps to the deployment step list. For example, in a highly available clustered environment, I have to do everything at least twice and make sure that all the components are configured and (re-)started to run in that clustered environment.

A deployment really is…

The above steps make sure that “an application is available to end users” in production. They can generally be broken down into the following deployment categories, and thus this is what a deployment really is:

1. Installing applications (Step d.)
This is the core part of the deployment, where the actual application logic is installed on the server, in J(2)EE/Java this usually means installing an EAR or WAR file.

2. Configuring resources (Step b.)
The application might need data from other systems, like databases or mainframes. The application can use resources to query for or receive data. Resources are typically configured in the application server.

3. Configuring middleware components (Step a, c, f, g, h, j.)
To reach the application, or to provide an instance where the application can run or house its data (e.g. creating application server clusters, http server instances, database instances, creating/updating tables), middleware must be configured.

4. Starting/Stopping components (Step e, i.)
To make sure that a component can function properly, it might need to be (re-)started.

5. and doing all the above (1, 2, 3, 4) in the right order.
To make sure the application starts up nicely, without errors, some order has to be maintained as to which component needs to (re-)start at what time. E.g. if you install the application (step d) and start it before the data source is configured (step b), the application might not start up properly. This becomes even more important when deploying a new version of your application in a highly available clustered environment.

There is one additional category for companies that use a DTAP (Development, Test, Acceptance, Production) environment setup:
6. Configuring the installed application for different environments
A deployment must make sure to customize the configuration to suit a specific environment. E.g. when you install an application in development it needs some data from the development database; the same application in test needs data from the database in test, and so on.

What a deployment is not…

A lot of people seem to think that installing the application on an application server is the actual deployment. Sure, the button in the application server administration console says "Deploy application", but that only does the application server part and will only work in simple environments, like the development or developer workstation environments. As soon as the application reaches a more complex environment with multiple middleware components, it just does not cut it anymore, and you need additional steps to make the application accessible to test/end users.

What or who can do a real deployment

Most people rely on other people to make the real deployment happen: manually configuring middleware, pressing the deploy button in the administration console and all that other stuff. Some are trying to automate deployments and are relying on deployment scripts or products like Phurnace or Buildforge, but these only do a very small part of the real deployment: installing applications (see category 1 above) and configuring resources (see category 2 above). There aren't many products on the market that can do a "real" deployment. XebiaLabs offers a product named Deployit that has support for all 6 deployment categories.

Be aware of what it takes

So if you aren't in the lucky position of having access to a product that does real deployment, be aware of what needs to be done to get your application available to the end user: do some scripting, make use of deployment tools and reserve time in your project to let (operations) people configure those middleware components manually.


Multimedia Communication: When to use screencasts/movies in demos.

For the last 9 months I've been working as a team member at XebiaLabs on a product called Deployit. The product automates deployments of applications. Like any Xebia team, we use Scrum for our development, and at the end of each two-week sprint we give a demo to the product owner and stakeholders of what we've been building.

We demo deploying applications onto a variety of application servers and other middleware, like for instance WebSphere and Oracle/BEA application server/portal, MQSeries, HTTP servers and so on… Sometimes demoing a story, like deploying application A to application server B, can take 10 to 15 minutes. That means that in an hour of demo time we cannot show every user story we finished in our sprint, so we only show the important ones. But what happens when demoing a single story can take up to 45 minutes? How can we cram multiple finished stories into the hour?

Multimedia to the rescue! Whenever we have a story that takes a lot of time to demo, we record a screencast while we are preparing, cut out the long waiting parts and then show it as a movie. Please note: we only do this when we demo a user story where we have to wait for a long time. It is also important to tell the product owner and stakeholders that you are playing a movie of something that normally takes 45 minutes but has been cut down to 2.

Two months ago we worked on WebSphere Portal deployments. The story (deploy portlets, update skins, themes and screens, and create a virtual portal) was implemented and added to the FitNesse test suite. To prepare for the demo, I ran the Portal deployment test case. FitNesse quickly entered all the necessary data into Deployit (1 second), then it hit the "Deploy" button and we were off (sub 1 second)! My MacBook Pro's CPU fans started spinning up because the deployment to Portal was taking place, and after 45 minutes it was done! Before I started the test, I had begun recording my desktop/screen with my favorite screencast software, ScreenFlow. After the test ran, I edited out all the parts of the recording where there was no activity on the screen. That left me with a screencast/recorded movie of about 2 minutes. Perfectly demo-able!

ScreenFlow recording Deployit
During the demo we played the movie, got feedback from the product owner and had lots of time left to demo other user stories. Of course we told the product owner that he shouldn't expect the same Portal performance ;) (2 vs. 45 mins.), but that was very clear to him.

It might feel a bit like cheating, because we aren't showing the real "live" user story, but sometimes that just isn't practical. We want feedback from the product owner and stakeholders. The more feedback we get, the better, and using screencasts/movies in demos helps us a great deal.


Java Persistence API – Podcast – JSpring ’09 Preview

On the 15th of April the NLJUG (Dutch Java User Group) will be holding their J-Spring conference. Four Xebians will be presenting. Every week we'll be providing a sneak preview on the podcast of one of those presentations.

The second sneak peek is about "The Java Persistence API – How do I build a real application" by Vincent Partington.

The preview is in Dutch; a full interview in English will be coming in about 4 weeks.

You can find more information here or read Vincent’s JPA blog series.

Vincent's presentation runs from 14:25 to 15:15.

So head on over to the show page or subscribe to our podcast!


Software Transactional Memory – Podcast – JSpring ’09 Preview

On the 15th of April the NLJUG (Dutch Java User Group) will be holding their J-Spring conference. Four Xebians will be presenting. Every week we'll be providing a sneak preview on the podcast of one of those presentations.

The first sneak peek is about Software Transactional Memory by Peter Veentjer.

You can find more information at the NLJUG presentation page.

Peter’s presentation is from 11:20 to 12:10.

Hosted by Robert van Loghem.

This preview is in Dutch. After the 15th we'll be doing a full episode in English about STM.

So head on over to the show page or subscribe to our podcast!
