June 1, 2009

Easy to Use is Easy to Automate

Filed under: Automation — Marcus Tettmar @ 2:54 pm

I’ve just finished writing a routine for a customer that automates what I can only describe as a truly horrendous user interface.  I’m not sure who designed it, or why they designed it the way they did, but I feel sorry for the people who have to use this software.  And apparently it is the industry-leading software in its niche.

The software is devoid of keyboard support and there are no menus. The only way around it is by clicking the mouse on icons, which have no way at all of gaining keyboard focus – there are no shortcut keys and you cannot even Tab to them.

The main data entry screen has a few accelerator keys (shown by an underlined character) dotted around but they are duplicated and don’t seem to work anyway!  So on one screen ALT-S would appear to focus three different fields, but in fact focuses none.

Once logged into the system it would seem the only way back to the main menu is to exit the app and restart it.

The only way to add a customer record is first to search for one.

And this is just the start of it.  

Luckily Macro Scheduler gives us image recognition and screen scraping abilities so even this dreadful user interface can be automated.  We did it.  But I had to spare a thought for the people who use this software every day.  Everything takes twice as long to do as it needs to.  It can’t be fun. It also suggests that the UI wasn’t tested and no consideration was given to its accessibility.

As I said in Why it’s Good to Automate:

Build an application with good keyboard support and your application can be automated more easily. If it can be automated easily it will be easy to use!

Sure there are some types of software where only an image based approach makes sense. But this particular app is just a way to view and manage customer information.

Sorry for the rant. But it does help to demonstrate how a decent UI can be more easily automated. While we have tools such as image recognition and screen text capture that help us automate cumbersome interfaces, a well designed UI can be automated more quickly and more efficiently. It also shows how automation can help test an application and ensure it is accessible. If a UI is well designed it can be tested; and, looking at it the other way around, if an app can’t be automated easily then perhaps its UI is hard to use, especially for people who cannot use a mouse or who rely on screen readers.

May 19, 2009

Multiple Monitors aid Productivity and Debugging

Filed under: Automation, General, Scripting — Marcus Tettmar @ 3:20 pm

If you’re not using more than one monitor, you are missing out big time.  For one thing, research by the University of Utah found that using two monitors increases productivity by 44%.  There’s a good summary and further comment on this on the Coding Horror blog.

A huge benefit of multiple monitors for Macro Scheduler developers is that they make developing and debugging automation macros a lot easier.  When I’m building a script that controls another application I will often put the Macro Scheduler editor on one monitor and the application I’m automating on another.  I can then see both side by side, so I don’t need to switch focus back and forth.  I can run my macro as I’m developing it and see the script at the same time as the results.  If I need to debug I can step through the script and see its progress at the same time as the outcome, without the changing of focus affecting it.

Debugging a script that simulates a user and needs to change focus can be a bit of a conundrum, since the act of debugging introduces delays, allowing more time for events to complete, and causes loss of focus.  In Macro Scheduler there’s a “Refocus Windows” setting in the Debugger, but even that isn’t enough in some cases.  Being able to work on the macro and see the target application at the same time without either interfering with each other is therefore the best solution.  

If you don’t have a PC or video adaptor that can support more than one monitor you could use ZoneScreen along with a laptop or second PC to act as your second screen.  A single monitor big enough to let you put the editor and target apps side by side without them overlapping would work too.

If you’re stuck with a small monitor and simply can’t have both editor and target application visible at the same time – you may be at a client’s site or working on a notebook – and need to debug code that needs to see the screen, don’t forget you can also set and run to breakpoints.  With a breakpoint you can step through the code and at any time run to the next breakpoint, allowing the macro to whizz through the code to that point without switching focus back to the editor between each step.  So for crucial sections of code which need to, say, capture a screen or scrape some text, it can be very useful.  Once the script reaches the breakpoint you will be returned to the editor where you can continue stepping line by line, or run to the next breakpoint or end.

In my opinion multiple monitors are an absolute must.  But there are limits!

May 14, 2009

Multiple Desktops for Load Testing or Increased Throughput

Filed under: Automation — Marcus Tettmar @ 10:40 am

A question arose in the forums the other day from someone looking at running some scripts for load testing and wanting advice on how best to run multiple instances of the same script.  See the thread here for some useful ideas and links. I thought I’d summarise them here.  

An article on using Macro Scheduler for load testing can be found here.

If you are new to load testing you might not be aware of the options available to you for multiplexing desktops. Obviously you can’t run a script that interacts with the desktop – focusing windows, sending keystrokes and mouse clicks, etc. – more than once at the same time on the same desktop. Chaos would ensue. If you want to simulate a bunch of users interacting with an interface you’re going to want lots of desktops running at the same time. You could, of course, use lots of real PCs. But that’s a rather expensive approach, especially if you want to run 100 or 200 users at the same time.

Here are some commonly used options:

Windows Terminal Server – Remote Desktop

Windows Server 2003/2008 allows multiple desktops to be active at once. The standard version comes with a license for up to 5 simultaneous client connections (CAL – Client Access License) and you can buy further CAL packs to increase the number of desktops you can have running at any one time.

You can then run multiple desktops – either for the same login or for different users – and have scripts running in each one.  So with one physical machine you can run multiple desktops to simulate multiple users. Clearly the more desktops you run, the greater the effect on performance; and since you may be trying to measure performance, there’s a balance to strike. Many companies will run, say, 10 physical machines with 20 desktops each, giving 200 virtual users.

One thing to consider with this approach is that since each desktop is running on the same machine you should be careful that your scripts don’t try to write to the same resources and cause conflicts – e.g. if they all try to write to the same file at the same time.
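
One simple way to avoid such conflicts is to give each desktop session its own output file. Here’s a minimal sketch, assuming each desktop logs in as a different user; the path is illustrative and I’m assuming the USER_NAME system variable is available:

//Give each desktop session its own results file to avoid write conflicts.
//Assumes sessions use different logins; a random suffix is added as a
//fallback in case two sessions share the same user name.
Random>100000,suffix
Let>log_file=c:\loadtest\results_%USER_NAME%_%suffix%.log
WriteLn>log_file,wres,Test run started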

Virtualization

Virtual PC is a free product from Microsoft that lets you create virtual machines. You can create a machine, install, say, Windows XP into it, and run that machine in its own window on your desktop. You could therefore create multiple virtual machines and run them simultaneously. Each has its own desktop, so UI automation routines can run in all of them simultaneously. Unlike with Remote Desktop, each desktop is completely separate from the others, so you would have no issues with accessing local resources. You would have to consider the Windows licensing implications and how the cost compares to Windows Server client access licenses.

Creating multiple virtual machines can also take some time, whereas with the remote desktop approach the operating system only needs to be installed once. There is a way to clone virtual machines by copying the VHD files and then running NewSID in the machine to give the OS a new security ID.

VMWare Server is another virtualization platform. There’s also Parallels. Again, when cloning virtual machines you’d have to make sure they have different SIDs. VMWare offers a solution called Lab Manager which I believe is intended to make the process of provisioning multiple machines easier, though I’m not experienced with VMWare.

In each case you would want to ensure that the approach you use isn’t adding to the performance bottleneck or it might make the tests pointless. Verify how many virtual machines or desktops your server can cope with and find the right balance.

You may need to run multiple copies of the same (or different) scripts at the same time for other reasons, and these tips are just as valid in those cases. For example, we’ve worked on automation scenarios where lots of data needs to be entered into a legacy user interface. The data may arrive at different times.  Rather than work through the queue one item at a time, a desktop is fired up for each item so that lots of tasks take place at the same time, speeding up the entire process.

See also:
Running UI Automation Routines Concurrently

May 12, 2009

The Variable Explorer

Filed under: Automation, General, Scripting — Marcus Tettmar @ 2:15 pm

An experienced Macro Scheduler scripter was recently trying to figure out why the following code wasn’t doing what he expected:

If>seg_1=05
  Let>monLtr=mm
Endif

Apparently monLtr was always being set to 05.  This told me that “mm” must be a variable which had earlier been set to 05.  But my friend said: “I’ve looked all through the code and I can’t see where mm is set anywhere”.

Then I reminded him of the Variable Explorer.  “The what?” he asked.

In the editor select Tools/Variable Explorer or hit Ctrl-Alt-V to bring up the Variable Explorer window.  It shows a list of all the variables created by your script.

Bingo!  There’s mm.  Expand it and you’ll see all the lines where it is set/created.   In this case it’s created on line 40 by the Min command.  

In a long script it’s easy not to see the obvious.  The Variable Explorer makes it easier.

Of course, it would also be sensible to use a better naming convention for the variable to avoid such confusion.  Or use VAREXPLICIT, or specify the literal string value with {"mm"}.  But don’t forget the Variable Explorer, as it can save a lot of hunting around.
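
For the record, here’s the original snippet using the literal-string form, so “mm” is never resolved as a variable:

If>seg_1=05
  //the braces and quotes make mm a literal string, not a variable reference
  Let>monLtr={"mm"}
Endif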

May 11, 2009

Test Validation Techniques

Filed under: Automation, Testing — Marcus Tettmar @ 1:42 pm

We recently received the following query to support:

I’m interested in Macro Scheduler for GUI testing. How do I verify whether the test has succeeded or not?

I thought it would be useful to post my response here:

There are a number of ways you could do this. Which one you use might depend on the type of application you are automating, or your specific requirements. You could:

  • Capture object text to see if it contains the data you would expect, using functions such as GetControlText, GetWindowText or Windows API functions (a minimal sketch follows this list). See: http://www.mjtnet.com/AutomatedTesting.pdf
  • Capture object and other screen text using the Screen Scraping functions: GetWindowTextEx, GetTextAtPoint, GetTextInRect. Compare captured text to expected data. There’s a sample script called Text Capture which you can use to test what text you can capture. Run it, and point the mouse cursor at the text you want to capture and confirm you can see it on the dialog. See Screen Scrape Text Capture Example and Screen Scraping with Macro Scheduler
  • Compare visually: Capture screen shots (or just windows) and compare the captured bitmaps with bitmaps captured at design time. Use the ScreenCapture and CompareBitmaps functions. This solution has the benefit of working with ANY technology on ANY platform. When you create the routine you capture the screens as they should appear when valid. So at runtime after entering data and controlling the app the macro would capture the screen/window and then compare to the valid images thus determining success or failure. See: How to use Image Recognition
  • There may be other options, especially for non-UI processes, such as reading data from the apps database (using DBQuery) or reading from a text file (ReadLn, ReadFile) or CSV file, checking the existence of files etc – depending on what the application you are testing does and what signifies a valid outcome.
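
As promised, here’s a minimal sketch of the first approach: capture a window’s text and check it for an expected string. The window title and expected text are hypothetical:

//Capture all detectable text from the target window (title is hypothetical)
GetWindowText>My Application,window_text
//See if the expected result text appears anywhere in the capture
Position>Transaction Complete,window_text,1,found
If>found>0
  MessageModal>Test PASSED
Else
  MessageModal>Test FAILED
Endif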

Are you using Macro Scheduler for automated testing?  What types of app are you testing and what methods are you using?  Please comment below.

April 30, 2009

Regular Expressions for Dummies

Filed under: General, Scripting — Marcus Tettmar @ 2:19 pm

I recently stumbled upon this series of video tutorials on Regular Expressions:

Regular Expressions for Dummies

If you want to know how to get some power out of Macro Scheduler’s new RegEx function you might find the above free video tutorial useful.
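
As a quick taster, here’s a simple use of the RegEx function that pulls every number out of a string (the input text is made up):

//Find all runs of digits in the input string
Let>input=Invoice 1024 was paid on day 17
RegEx>\d+,input,0,matches,num_matches,0
//matches_1, matches_2, ... hold the individual matches
MessageModal>Found %num_matches% numbers, the first is %matches_1%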

One resource seen in the video that I didn’t know about is RegExr – an Online Regular Expression Testing Tool.  It’s free to use and there’s a desktop version you can download too.

But my favourite RegEx tool is RegExBuddy from JG Soft.  An invaluable tool for anyone working with Regular Expressions.  The author also maintains this excellent Regular Expression resource which includes a tutorial, examples and references.

April 27, 2009

Macro Scheduler 11.1.09 Update

Filed under: Announcements — Marcus Tettmar @ 9:53 am

Macro Scheduler 11.1.09 is now available with the following changes:

  • Fixed: ExportData failing to write correct data for subsequent exports of other data labels
  • Fixed: insertion of SRT not completing autoinserted END with subroutine name
  • Fixed: minor syntax highlighting anomalies
  • Fixed: SetDialogObjectFocus caused error if object not visible
  • Fixed: ShowAllChars menu option not being remembered correctly
  • Fixed: Code Folding menu option not being remembered correctly

Registered Downloads/Upgrades | Evaluation Downloads | New License Sales

April 23, 2009

Macro Microdecisions for Macro Economic Impact

Filed under: Automation, General — Marcus Tettmar @ 8:09 am

While many of our customers use Macro Scheduler for automating key business processes I would hazard a guess that a large number of people are using the tool to automate smaller tasks related to their own individual work. In themselves these tasks may not appear too important, and may not even be visible to the upper echelons of management concerned with improving efficiencies of larger systems. But, improving the efficiency of many small tasks can have a big impact on the overall profit and loss of a business.

In “Microdecisions for Macro Impact” on the Harvard Business Review blog, Tom Davenport talks about how small decisions made lots of times by many employees can have a major impact on the business. How these small “microdecisions” are addressed and improved can impact overall performance.

Tom suggests that one approach to improving a microdecision is to automate it. Automating a repetitive task not only speeds it up but removes the chance of error. It also frees up employees’ time to work on other matters. This will come as no surprise to my regular readers and Macro Scheduler users.

As the article mentions, one simple way to improve microdecisions is to create a checklist to ensure the worker does not miss any key steps. Another common approach is to draw up a flowchart. With Macro Scheduler Pro Enterprise we can combine flowcharts with automation. Macro Scheduler Pro Enterprise includes Workflow Designer, which allows you to flowchart a process graphically. This is a great way to analyse and document a business process. Unlike regular flowcharting tools, however, once the flowchart has been created and refined you can then start adding real code to it. You then have a documented process which can actually carry out the task for you. The documentation evolves into the solution. If you’ve timed the manual process you can find out what kind of efficiency savings you are making by analysing the log files (or building timers into your code) and comparing.

A little goes a long way – automate lots of small procedures like this and you could boost your organisation’s overall performance.

April 20, 2009

Twittering from Macro Scheduler with the Twitter API

Filed under: Scripting, Web/Tech — Marcus Tettmar @ 7:42 am

Way back in the deep and distant past when the Internet was new and Bill Gates thought it was just a passing fad, I remember reading about a Cola vending machine on a University campus that some frivolous young boffins hooked up to the ‘net so that you could check its inventory from anywhere in the world using an old fashioned network command called “finger”. Why? Because they could.

Fast forward to the technologies of the current day and the latest trend, Twitter, and history is repeating itself. In the last week I’ve read about a restaurant that takes orders via Twitter, a bakery tweeting the emergence of fresh loaves from the oven and, utterly pointlessly, some guys who created a system which sends a tweet every time their cat enters or exits its cat flap. Why? Well, because they can, I guess.

Not wanting to be left out I decided to write some Macro Scheduler code to tweet status updates and monitor replies. Why? Well there might be a good reason for being able to do this – I’m sure someone will have one. Perhaps you have a client who wants you to set up a system to monitor the movement of his cat, process restaurant orders, or your local baker wants an automated fresh-loaf tweeter! But mostly, it’s because we can.

You’ll find the Twitter API documentation here. Here’s the code to Tweet a status update:

Let>username=YOURTWITTERNAME
Let>password=YOURPASSWORD

//Tweet from Macro Scheduler
Let>url=http://%username%:%password%@twitter.com/statuses/update.xml
Let>message=Kitty just left the building
HTTPRequest>url,,POST,status=%message%,result

Being serious for a moment I can see how a macro that monitors an application might want to post status updates to Twitter, or a backup script could alert you by Twitter when there’s a problem. It might be a public system, but don’t forget that Twitter profiles can be made private too, and Tweets can be viewed on and sent from your BlackBerry, iPhone, or even by SMS.

The following script sets up a loop which monitors your Twitter stream for “mentions” of your username. This might form the basis of a script which retrieves orders. Perhaps it could listen to Twitter for commands and carry out actions based on what message was sent. Or perhaps you just want a macro which does something when a cat decides to head out for the night. Use your imagination.

Let>username=YOURTWITTERNAME
Let>password=YOURPASSWORD
Let>ini_file=%SCRIPT_DIR%\twit.ini
Let>_delay=30

//monitor twitter username "mentions" loop
Label>monitor_loop

Let>url=http://%username%:%password%@twitter.com/statuses/mentions.xml
HTTPRequest>url,,GET,,result

//remove the <user>...</user> portion (I don't need it and it avoids having to distinguish the status IDs from the user IDs)
//note: (?s) lets the lazy .*? match across the multi-line <user> block
RegEx>(?s)<user>(.*?)</user>,result,0,user_matches,nf,1,,result

//extract all texts
RegEx>(?<=<text>)[^>]*?(?=</text>),result,0,text_matches,num_texts,0
If>num_texts>0
  //extract all ids
  RegEx>(?<=<id>)[^>]*?(?=</id>),result,0,id_matches,num_ids,0

  //get last known
  Let>last_known_id=0
  IfFileExists>ini_file
    ReadIniFile>ini_file,SETTINGS,LAST_ID,last_known_id
  Else
    //first run: create the INI file
    WriteLn>ini_file,wlnr,
  Endif

  //iterate through texts
  Let>k=0
  Repeat>k
    Let>k=k+1
    Let>this_id=id_matches_%k%
    If>this_id>last_known_id
      Let>msg_text=text_matches_%k%
      /*
      msg_text contains the message 
      Use your imagination here!
      For now we'll show it in a message
      */
      MessageModal>msg_text
    Endif
  Until>k=num_texts

  //store last ID
  EditIniFile>ini_file,SETTINGS,LAST_ID,id_matches_1
Endif

Wait>_delay
Goto>monitor_loop

The script retrieves the 20 most recent “mentions”. It stores the last seen ID in an INI file so that on the next check it ignores those it has seen before, only retrieving messages with a larger ID number.

This is a quick and dirty solution with no error checking, using RegEx to parse the XML that is returned by the call to Twitter. You may prefer to use the MS XML object as shown here.

Whether this proves useful or completely pointless, I hope you have fun. If you’re using Macro Scheduler with Twitter, please add a comment below to let us know how … and why!

Don’t forget you can follow me on Twitter where I may occasionally say something useful.

April 17, 2009

Working with Windows API Functions

Filed under: Scripting — Marcus Tettmar @ 9:43 am

Macro Scheduler‘s LibFunc command allows you to run functions contained inside DLLs.  A DLL, or Dynamic Link Library, is a file which contains functions that other programs can use.  Windows includes a number of DLLs containing lots of functions that make Windows tick, and other applications are able to call them.  These functions are known as the Windows API (Application Programming Interface). Using LibFunc a Macro Scheduler script can access some of these functions too.

The Windows API Reference can be found here:
http://msdn.microsoft.com/en-us/library/aa383749(VS.85).aspx

This provides a list of functions which you can browse alphabetically or by category.

Data Types

Before I go on I should mention data types.  Not every Windows API function can be used by Macro Scheduler.  This is because Macro Scheduler does not know about every possible Windows data type.  Macro Scheduler currently only knows about integers, long integers and strings. Almost any function that requires or returns integer and/or character-based data can be called.  But anything that requires, for example, a record structure or a callback function cannot be used.

The API documentation lists the Windows data types here:
http://msdn.microsoft.com/en-us/library/aa383751(VS.85).aspx

Many of these are based on the same fundamental data types.  E.g. all the HANDLE types are just unsigned numbers, so are supported by Macro Scheduler’s long integer.  LPTSTR and LPCTSTR are interchangeable with strings. From this list only CALLBACK and FLOAT are NOT compatible.

So, as long as the function requires or returns only bools, integers or strings, we should be able to use the function in Macro Scheduler.  Note that BOOLs are really just integers.  A BOOL is an integer with value 1 (true) or 0 (false).

An Example: CopyFile

Let’s look at a Windows API function and how we can use it in Macro Scheduler.

Take a look at the CopyFile function:
http://msdn.microsoft.com/en-us/library/aa363851(VS.85).aspx

At the top of the page we are told what this function does:

“Copies an existing file to a new file.”

We are then given the syntax.  You’ll notice that it is provided using C++ syntax.  It certainly helps if you know C++ but it is not essential and hopefully this example will help you understand what the syntax definition is telling us:

BOOL WINAPI CopyFile(
  __in  LPCTSTR lpExistingFileName,
  __in  LPCTSTR lpNewFileName,
  __in  BOOL bFailIfExists
);

The first thing this tells us is that CopyFile returns a BOOL (0 or 1).  Inside the parentheses we see that the function requires three parameters.  The first two are of type LPCTSTR.  For our purposes this means they are strings.  The third parameter is another BOOL.

We are then told what each parameter is for, what the function returns and various remarks.

While the names of the parameters are quite self explanatory the documentation gives us more detail.  So we can see that the first parameter lpExistingFileName represents the name of an existing file, lpNewFileName is what we set to the name of the new file we want to copy to, and bFailIfExists can be set to true (1) to make the function fail if the new file already exists or false (0) to overwrite.

We are told that the function returns zero if it fails, or nonzero if it succeeds.

DLL and Function Name

At the end of the page is some information crucial to us in the Requirements section.  This tells us which versions of the operating system support this function and what DLL it is contained in – Kernel32.dll in this case.  Note also that it tells us the alias names of the function.  In this case CopyFileA for the ANSI version and CopyFileW for the unicode version (Why “W” not “U” I hear you ask – W stands for WideString, a special form of string which can contain double byte characters).

So, putting it all together, we end up with the following LibFunc call:

LibFunc>kernel32.dll,CopyFileA,result,c:\source.txt,c:\my docs\new.txt,0

From left to right LibFunc takes the DLL name, then the function name, a variable which should return the result of the call and then the values to pass to the function. One thing to be aware of is that DLL function names are case sensitive. Make sure the function name is entered into the LibFunc command exactly as specified in the API documentation.

Try the above line with a real filename to see it in action.
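
And since the function returns zero on failure, the script can check the outcome. A minimal sketch (the file names are illustrative):

LibFunc>kernel32.dll,CopyFileA,result,c:\source.txt,c:\my docs\new.txt,0
//CopyFile returns 0 on failure, non-zero on success
If>result=0
  MessageModal>Copy failed
Else
  MessageModal>Copy succeeded
Endif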

Passing by Reference

Some DLL functions modify the values being passed to them.  Parameters are passed by reference rather than by value.  This means that what you’re really passing is a pointer to the memory address that stores that value, rather than just the value itself.  So when the function changes that value we can see the new value after the call. 

The way LibFunc handles this is that it puts each parameter value into an array, using the name of the return variable specified.  So if you specify the return value as “result” and the function takes 3 parameters LibFunc would return result, result_1, result_2, and result_3 where result contains the function result and result_1 to result_3 contain the values of the passed parameters which might have changed if the function modifies them.

Here’s an example of a Windows API function which returns data in a passed parameter:

http://msdn.microsoft.com/en-us/library/ms724373(VS.85).aspx

UINT WINAPI GetSystemDirectory(
  __out  LPTSTR lpBuffer,
  __in   UINT uSize
);

Note that the first parameter has the word “out” in front of it.  This signifies that its value is set by the function.  The function also returns an integer.  If we read the docs we see that the function writes the system directory in lpBuffer and returns the number of characters written to lpBuffer.

So I can use the following code to get the system directory:

LibFunc>Kernel32,GetSystemDirectoryA,dir,buffer,255
MidStr>dir_1,1,dir,sys_dir
MessageModal>System Directory: %sys_dir%

Note that I’ve set the return variable to “dir”. We can pass any old value in buffer, but I’ve used “buffer” here to make it obvious what it does.  Remember that LibFunc creates an array named after the result variable.  So we get “dir” containing the number of characters written to the buffer, dir_1 containing the buffer itself, and dir_2, which will just be 255 because that’s what we passed in; it isn’t changed by the function as it is an “__in” parameter.

We set the maximum buffer size to 255, so we need to extract only the characters actually returned, which is why the function tells us how many characters it wrote.  I’ve used MidStr to extract just those characters from the returned buffer.

Windows Constants

Many times we need to know the value of a Windows Constant.  The documentation for a function may refer to the name of a constant you need to use with the function.  E.g.:

ShowWindow
http://msdn.microsoft.com/en-us/library/ms633548(VS.85).aspx

BOOL ShowWindow(      
    HWND hWnd,
    int nCmdShow
)

The docs say that nCmdShow specifies how the window is to be shown and says it can be one of the following values: SW_FORCEMINIMIZE, SW_HIDE, SW_MAXIMIZE, SW_MINIMIZE, SW_RESTORE and so on.  These are Windows Constants.

In Windows development languages such as C++ and Delphi these constants are defined in the header files. In Macro Scheduler they are not defined, so we need to define them ourselves:

Let>SW_RESTORE=9

But, I hear you ask, how do I know that SW_RESTORE’s value is 9?  Well, if you have a development environment like Visual Studio or Delphi installed you can find out by searching the header files. 

However, if you don’t have this facility there’s a very handy free tool from Microsoft snappily titled “P/Invoke Interop Assistant” which contains a database of Windows functions and constants you can search.  You can download it from:

http://www.codeplex.com/clrinterop

Under the “SigImp Search” tab enter the constant you are looking for and it will tell you its value.

The Windows header files give the constant values in hexadecimal, so if obtaining them from the header files you will need to convert to decimal. The Windows Calculator is handy for doing this. “P/Invoke Interop Assistant” also shows the values in hexadecimal, but if you click the “Generate” button it will create C# or VB code with the value declared as an integer.
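
Putting a constant to work, here’s a sketch that restores a window via the ShowWindow API. The window title is illustrative, and I’m assuming GetWindowHandle to fetch the window handle:

Let>SW_RESTORE=9
//get the handle of the target window (title is illustrative)
GetWindowHandle>Untitled - Notepad,hwnd
//ShowWindow lives in user32.dll; pass the handle and the show command
LibFunc>user32.dll,ShowWindow,swresult,hwnd,SW_RESTORE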

STDCALL Calling Convention

Finally, a note about calling conventions.  When a DLL is created the programmer can decide in what order parameters should be passed to the functions and who should clean up afterwards.  Windows API functions use the “stdcall” calling convention in which arguments are passed from right to left and the callee, i.e. the DLL, is responsible for cleaning the stack.  This therefore is the calling convention supported by LibFunc.  You don’t need to worry about this when calling Windows API functions.  But if you come to working with third party or custom DLLs, or ones you have created yourself, you will need to make sure the DLL uses the stdcall convention.