Archive for the ‘Articles’ Category

A Journey to Automated Performance Test Execution Reports

Posted on April 5th 2011 by Joel Deutscher

A performance test execution report summarises the results of a single test execution. These reports are produced after each execution and usually contain varying levels of detail.

In my 10 years of performance test consulting, I have seen reports that are one-pagers, produced directly by the tool with very little user interaction. I have also seen reports that go into significant detail and require hours of data manipulation to produce. Personally, I prefer the latter.

It’s not that I’m a stickler for documentation; it’s more that I believe an execution report should tell me all of the interesting parts of a test, from response times to resource utilization. I want to know as much as I can about injecting a particular load level against an application. When I walk into work in the morning, I want to be able to answer the question “How did last night’s test go?”.

While working at a major bank, I had the opportunity to define the test execution report templates for a major application. I worked closely with the production availability team to create a report template that provided everything required to determine whether a change could result in performance degradation. The sign-off process involved a pile of execution reports that were analysed top to bottom and signed off one by one. Needless to say, I learned a lot about which content was vital for these reports, and which could be omitted.

Unfortunately, despite a very happy production support team, and a nomination by the client for a customer service award, there was one problem: the reports took too long to produce. Few elements of the report could simply be copied out of the customer’s performance tool (LoadRunner); many required parsing raw results through Excel to get the desired output.

After a successful release, I left the client and moved on to a new challenge. I wanted to find a way to maintain the quality of the reports while reducing the effort required to produce them. I heard that after I left, the client had introduced some automation for the Excel-processing parts of the report. While this reduced their turnaround time, it was still a very hands-on process. I knew there had to be a better way.

Six months on, I now receive 95% of the elements of this original report in my inbox automatically after each test execution. My team can now spend more time on analysis, and less time on producing graphs. It is clear within minutes of an execution whether or not there is a major problem. At last, I have my cake and am eating it too.
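
As a taste of what that automation looks like, here is a minimal sketch of a post-test check. It assumes the results have already been exported to a simple CSV summary; the file name, column layout and SLA threshold below are illustrative assumptions, not the actual tooling used on this project.

#include <stdio.h>

#define SLA_SECONDS 3.0	/* illustrative response time target */

int main(void)
{
	/* Hypothetical export: transaction,average,percentile_90 */
	FILE *fp = fopen("results_summary.csv", "r");
	char line[256];
	char name[128];
	double average, p90;
	int breaches = 0;

	if (fp == NULL) {
		fprintf(stderr, "Could not open results_summary.csv\n");
		return 1;
	}

	/* Skip the header row */
	fgets(line, sizeof(line), fp);

	while (fgets(line, sizeof(line), fp) != NULL) {
		if (sscanf(line, "%127[^,],%lf,%lf", name, &average, &p90) == 3) {
			if (p90 > SLA_SECONDS) {
				printf("SLA breach: %s (90th percentile %.2fs)\n", name, p90);
				breaches++;
			}
		}
	}
	fclose(fp);

	printf("%d transaction(s) exceeded the %.1f second target\n", breaches, SLA_SECONDS);
	return breaches > 0 ? 1 : 0;
}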

Over the next couple of weeks, I will document the journey from looking at the individual elements of an execution report to having an automated report in my inbox after each test.

Dynamic Parameter Names in LoadRunner

Posted on January 4th 2011 by Joel Deutscher

Occasionally, you might want to perform the same action multiple times with different parameters. While you should be very careful with this type of scripting, the following is a basic example of how to do this in LoadRunner.

The following script will perform three searches on google.com.au using three different parameters.

Action()
{
	int i;
	char temp[16];	// holds the constructed parameter name, e.g. "{Param_1}"

	// Progress through 3 search terms stored in {Param_1}, {Param_2} and {Param_3}
	for (i = 1; i < 4; i++) {
		// Build the parameter name, resolve it and store the result in {Param_Value}
		sprintf(temp, "{Param_%d}", i);
		lr_save_string(lr_eval_string(temp), "Param_Value");
		lr_start_transaction("Search");

		// Extract Variables
		web_reg_save_param("Number_of_Results", "LB=About ", "RB= results", LAST);

		// Validate Content
		web_reg_find("Text={Param_Value} - Google Search", LAST);

		web_url("search", 
			"URL=http://www.google.com.au/#hl=en&q={Param_Value}", 
			"Resource=0", 
			"RecContentType=text/html", 
			"Referer=", 
			"Snapshot=t1.inf", 
			"Mode=HTML", 
			LAST);
 
		lr_end_transaction("Search", LR_AUTO);
	}
	return 0;
}
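
The Number_of_Results value captured by web_reg_save_param above is never actually used in the script. If you want each iteration to report what it found, a couple of extra lines inside the loop, placed after the web_url call and before lr_end_transaction, will log the captured count alongside the search term. This assumes the "About ... results" boundaries still match Google's current markup, which changes over time.

		// Log the captured result count for this search term
		lr_output_message("Search for %s returned about %s results",
			lr_eval_string("{Param_Value}"),
			lr_eval_string("{Number_of_Results}"));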

Timing HTTP Redirects in LoadRunner

Posted on September 29th 2010 by Joel Deutscher

One of the most commonly mis-scripted elements in performance testing is web authentication. I’m not talking about integrated authentication like SPNEGO; I’m talking about a simple HTTP POST with authentication details, followed by the site’s authenticated home page. The problem is that the user experiences a two-step process.

[Diagram: User Login]

In reality the process is actually three steps, with the middle step transparent to the user. Because it is transparent, tools like LoadRunner will attempt to represent the end-user experience and record only two steps. In most cases, this is the desired end result. The following diagram shows the three steps that occur.

[Diagram: Web Authentication]

The issue with recording logon like this is that it does not allow you to separate the authentication time from the load time of the subsequent page. It is a simple process to separate the timing of the authentication from the subsequent page load, and the following code snippet shows you how to do it in LoadRunner.

Action() {
	lr_start_transaction("Open_Logon_Page");

	// Validate Logon Page
	web_reg_find("Text=Lost your password?", LAST);

	// Open Logon Page
	web_url("logon_page",
                "URL=http://www.headwired.com/login.php",
                "TargetFrame=",
                "Resource=0",
                "RecContentType=text/html",
                "Snapshot=t1.inf",
                "Mode=HTML",
                LAST); 

	lr_end_transaction("Open_Logon_Page", LR_AUTO);

	// Disable HTTP Redirects to time Authentication
        web_set_option("MaxRedirectionDepth", "0", LAST);

	lr_start_transaction("Logon");
	lr_start_sub_transaction("Authenticate", "Logon");

	// Find Authenticated URL
	web_reg_save_param("redirect_url", "LB/ic=Location: ", "RB=\r\n", "Search=Headers", LAST);

	// Submit Authentication
	web_submit_data("web_submit_data",
                "Action=http://www.headwired.com/login.php",
		"Method=POST",
		"TargetFrame=",
		"Referer=",
		ITEMDATA,
		"Name=log", "Value={USERNAME}", ENDITEM,
		"Name=pwd", "Value={PASSWORD}", ENDITEM,
		"Name=redirect_to", "Value=http://www.headwired.com/dashboard/", ENDITEM,
		"Name=testcookie", "Value=1", ENDITEM,
		"Name=wp-submit", "Value=Log In", ENDITEM,
		LAST);

	lr_end_sub_transaction("Authenticate", LR_AUTO);

	// Re-enable HTTP redirects now that authentication has been timed separately
	web_set_option("MaxRedirectionDepth", "10", LAST);

	lr_start_sub_transaction("Authenticated_Page", "Logon");

	// Verify Authenticated Page
	web_reg_find("Text=Dashboard", LAST);

	web_url("authenticated_page",
                "URL={redirect_url}",
                "TargetFrame=",
                "Resource=0",
                "RecContentType=text/html",
                "Snapshot=t1.inf",
                "Mode=HTML",
                LAST); 

	lr_end_sub_transaction("Authenticated_Page", LR_AUTO);
	lr_end_transaction("Logon", LR_AUTO);

	return 0;
}
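
One refinement worth considering: with redirects disabled, the authentication request should come back as a redirect rather than a full page, so the script can check the HTTP status code before trying to follow the captured Location header. The snippet below is a sketch of that check using web_get_int_property; it would sit immediately after web_submit_data, with the status variable declared at the top of Action(). Treat it as an optional addition rather than part of the original recipe.

	int status;	// declare at the top of Action()

	// Confirm the authentication request returned a redirect (e.g. 302)
	// before following the captured Location header
	status = web_get_int_property(HTTP_INFO_RETURN_CODE);
	if (status < 300 || status > 399) {
		lr_error_message("Expected a redirect after authentication, got HTTP %d", status);
		lr_end_sub_transaction("Authenticate", LR_FAIL);
		lr_end_transaction("Logon", LR_FAIL);
		return -1;
	}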

Are pilots taking performance testing off the critical path?

Posted on June 25th 2010 by Joel Deutscher

A trend I am seeing in solution delivery these days is piloting. With piloting, new functionality can be slowly ramped up in production, or, more importantly, ramped down when problems occur.

This detaches performance testing from specific release dates. Essentially, the application is regression tested with all new functionality disabled, reducing the amount of testing required to go live. New functionality can then be switched on after adequate testing.
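
To make the idea concrete, here is a minimal sketch of what such a toggle might look like in code. The flag name and the use of an environment variable are purely illustrative assumptions; in practice the switch would typically live in a configuration file or an admin console so it can be flipped in production without a deployment.

#include <stdio.h>
#include <stdlib.h>

// Returns 1 if the named feature has been switched on, 0 otherwise.
// Here the flag is read from an environment variable for simplicity.
int feature_enabled(const char *name)
{
	const char *value = getenv(name);
	return value != NULL && value[0] == '1';
}

int main(void)
{
	if (feature_enabled("FEATURE_NEW_SEARCH")) {
		printf("Routing the request to the new search implementation\n");
	} else {
		printf("Routing the request to the existing search implementation\n");
	}
	return 0;
}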

As testing has always been the accordion in the SDLC band, making a lot of noise as its timeline is squashed, this is probably a good thing.

Personally, I am a fan of this style of development. I have seen too many organizations push ahead with releases regardless of testing coverage. This approach allows project managers, vendors and stakeholders to achieve their launch dates without the front page horrors.

Web Browser Comparisons Explained

Posted on June 19th 2010 by Joel Deutscher

A comparison of some popular web browsers by Joseph B. on OS X Daily.

[Image: web browser comparisons explained]

While I always appreciate a good bashing of Internet Explorer, and I have seen Firefox get bogged down with too many plug-ins, there are a few plug-ins that I highly recommend.

And when using high-resolution monitors, I always install NoSquint.