SOFTWARE DEVELOPMENT NETWORK
MAGAZINE ISSN: 2211-6486
IN THIS ISSUE, AMONG OTHERS: Advantage Database Server 11: A First Look • Onderhoudbare GUI tests met Coded UI • Getting testing done in the sprint
Issue 115, December 2012. SDN Magazine is published quarterly and is a publication of the Software Development Network.
www.sdn.nl
Colophon

Publication: Software Development Network, twentieth volume, No. 115 • December 2012

SDN board: Remi Caron, chairman; Rob Suurland, treasurer; Marcel Meijer, secretary

Editors: Marcel Meijer ([email protected])

Contributors to this issue: Roel Hans Bethlehem, Bob Swart, Maarten van Stam, Alexander Meijers, Remi Caron, Marcel Meijer and, of course, all the authors!

Listings: See the website www.sdn.nl for any source files from this issue.

Contact: Software Development Network, Postbus 506, 7100 AM Winterswijk. Tel. (085) 21 01 310, fax (085) 21 01 311, e-mail: [email protected]

Design and layout: Reclamebureau Bij Dageraad, Winterswijk, www.bijdageraad.nl

©2012 All rights reserved. No part of this publication may be reproduced in any way whatsoever without prior written permission from SDN. Unless stated otherwise, articles are written in a personal capacity and therefore do not necessarily reflect the views of the board and/or the editors. All trademarks mentioned in this magazine are the property of their respective owners.

Foreword

Dear SDN Magazine reader,

My, what a hot autumn we have behind us! It was release season: Delphi came out with a new version, and Microsoft had a thing or two to release as well.
At our last SDN event, many sessions focused on Windows 8. By then Windows 8 had already reached RTM and was available via MSDN and TechNet; the official release eventually took place on 26 October. During that keynote the Microsoft Surface was launched as well. This much-praised iPad killer is available as of now, though unfortunately there is no sales outlet in the Netherlands yet. If you can't wait, WP7.nl has listed all the options. Think carefully before you act.

After the official release of Windows 8, Windows Phone 8 was officially released as well, and several handsets are available in the Netherlands. This successor to Windows Phone 7 promises a golden future for Microsoft's Windows Phone platform. The SDK for this platform is now also available, so rise to the challenge and build the missing and wonderful apps!

The //Build event, held from 30 October through 2 November on Microsoft's campus, also brought plenty of news. A new version of the Windows Azure SDK, version 1.8, was released. There were several announcements for the Windows Azure platform, such as the addition of iOS and Windows Phone 8 targets to Windows Azure Mobile Services, and the announcement that, alongside the Windows Azure Market, a Windows Azure Store is coming: a very interesting development for ISVs and tool vendors. Other news from //Build was that TFS Preview has left its preview status and is now truly in production. Best of all: for teams of 5 people, http://tfs.visualstudio.com is free, regardless of the number of projects. If you have not yet had the chance to evaluate TFS on Windows Azure, I would certainly do so now.

In short, the cold winter evenings ahead offer plenty to keep you from getting bored. What a great profession we have! This magazine again contains a fine selection of articles by Cary Jensen, Clemens Reijnen, Sander Hoogendoorn, Michiel van Otegem, Lammert Vinke and Marcel Meijer. We wish you lots of reading pleasure!

Kind regards,
Marcel Meijer

Advertisers: CSC, Achmea, Compuware, Delta-N, Macaw

Advertising? Information about advertising and advertisement rates can be found at www.sdn.nl under the Magazine section.
Agenda 2012/2013
• SDC, 3/4 December 2012, Papendal
• Microsoft TechDays Nederland, 7/8 March 2013, Den Haag
• SDE 1, 25 March 2013
• SDE 2, 7 June 2013
• TechEd Europe, 25-28 June 2013, Amsterdam
• SDE 3, 20 September 2013
• PDC, or another //Build after all?, October 2013
• SDC, 25/26 November 2013

Contents

• Foreword (Marcel Meijer)
• Table of Contents
• Agenda
• Advantage Database Server 11: A First Look (Cary Jensen)
• Using 32 bit (Legacy) DLL on Windows Azure (Marcel Meijer)
• The Waterfall Dentist Anti-Pattern (Sander Hoogendoorn)
• Onderhoudbare GUI tests met Coded UI (Lammert Vinke)
• JSON with Delphi Part 2 (Cary Jensen)
• Mijn hart zingt voor Visual Studio 2012 (Michiel van Otegem)
• Windows Azure Portal Updates (Marcel Meijer)
• Getting testing done in the sprint (Clemens Reijnen)
DELPHI
Cary Jensen
Advantage Database Server 11: A First Look

The Advantage Database Server is a high-performance, low-maintenance database server that has earned the affection of small-to-medium sized businesses and vertical market developers. It combines speed and high-end features with an ease-of-use that is welcome but unexpected. And now, with the release of Advantage Database Server 11, a whole new world of features and enhancements makes Advantage even more powerful and easier to use and manage.

This article is designed to introduce you to the many exciting new features introduced in Advantage 11. It begins by looking at the Advantage Web Platform, a RESTful service that makes your data available from anywhere, anytime. Next, we will look at the new online maintenance features, improvements to replication, and the many enhancements to Advantage's support for SQL, the structured query language. Later in this article, you will learn about the new and enhanced connectivity options, general performance updates, and improved error reporting.

Advantage Web Platform

One of the most anticipated updates in Advantage 11 is the new Advantage Web Platform, a RESTful Web service that provides you with the ability to interact with your Advantage databases over the World Wide Web. The Advantage Web Platform enables two distinct capabilities. The first is the ability to read, update, and delete data in your Advantage data dictionaries through a RESTful interface. This feature is implemented through the OData protocol, an industry standard for accessing data using a REST interface. This interface also provides you with access to metadata, stored procedure and query execution, as well as offline storage. The second capability is that it permits you to manage and query your databases from a Web browser, letting you access your data from anywhere, using any browser, even from a smartphone.

When you install the Advantage Web Platform, you are actually installing a lightweight version of the Apache Web server, and it is through this Web server that the Advantage Web Platform provides access to your data. This service is configured by default to listen for HTTP (hypertext transfer protocol) requests on port 6272, and HTTPS (secure HTTP) requests on port 6282. These port numbers, and other aspects of the Apache Web service, can be configured; please see the Advantage help for details.

The following sections describe the two principal features provided by the Advantage Web Platform.

Data Anywhere Using the Advantage Web Platform and OData

OData is a standards-based protocol for accessing data using a RESTful Web interface. REST, which stands for REpresentational State Transfer, is a description of the basic request and response interaction that drives the World Wide Web. A RESTful Web service is one that you can interact with using basic HTTP commands over the Internet.
A significant feature of REST is that you call the methods of your Web service using the values that you pass in your URI (uniform resource identifier), sometimes passing additional data in the HTTP message body. These methods, when they return a value, return a string. Using the OData protocol, this string will be either in AtomPub (Atom Publishing Protocol), an XML-based format, or in JSON (JavaScript Object Notation), and you get to choose which format you want.

Because OData uses REST, any client that can communicate across the Internet and issue HTTP requests can interact with the Advantage Web Platform to retrieve, update, and manage data in your database. Importantly, because OData relies on HTTP, the client from which the RESTful Web service is accessed does not need the Advantage client API installed locally.

OData can only be used to interact with Advantage data dictionaries that have been configured for access through the Advantage Web Platform, and only through the Advantage Database Server (the Advantage Web Platform cannot be used with the Advantage Local Server). For information on configuring a database for access through the Advantage Web Platform, please see the Advantage help.

Fortunately, nearly every modern client development tool has the capability to communicate using HTTP, including .NET, JavaScript and jQuery, Java, C++, Objective-C, and Delphi, to name just a few. This means that the Advantage Web Platform provides access to your Advantage data from almost any imaginable platform, including Windows, Linux, the Mac, Unix, mobile devices, iOS and Android tablets, and Web pages.

OData consists of a sophisticated collection of standards and capabilities. For more information, including examples of how to use OData to communicate with the Advantage Web Platform, see the Advantage help. For more information on OData, visit http://www.odata.org/.

The Advantage Web Administrator

The Advantage Web Administrator is a Web application that gets installed alongside the Advantage Web Platform. This Web application provides you with a wide range of options, including the ability to view information about connected users, active queries, configuration parameters, communications statistics, and more. Depending on how you configure the Advantage Web Administrator, you may also be able to run ad-hoc queries, interact with multiple data dictionaries, and even change configuration parameters (though many
of these configuration changes made within the Advantage Web Administrator require a subsequent restart of the Advantage Database Server service). To provide an entry point for communication with Advantage through the Advantage Web Administrator, a new concept, the root dictionary, has been introduced. Each server can have at most one root dictionary, and it is the dictionary to which the Advantage Web Administrator initially connects.

You configure the root dictionary for a particular server using the Advantage Configuration Utility. The Advantage Configuration Utility, showing the new Root Dictionary file location, is shown in Figure 1.

Fig. 1: The Advantage Configuration Utility showing the new Root Data Dictionary Path option

You can select one of your existing data dictionaries to serve as your root dictionary. Alternatively, you can create a new, empty data dictionary and use that as your root dictionary. This second option has the benefit of providing you with greater control over the data dictionary features that you want to expose using the Advantage Web Administrator.

In addition to defining a root dictionary using the Advantage Configuration Utility, there are several additional configuration steps that you must perform. You must specify a location for the root dictionary in the adsweb.conf file, located in the C:\Program Files\Advantage 11.0\adsweb\apache\conf directory, and set the JavaScript rootDB variable declaration in the configOptions.js file, located in C:\Program Files\Advantage 11.0\adsweb\apache\htdocs\adsconfig. After changing the adsweb.conf file, you must restart the Advantage Web Platform service. For more information on configuring the Advantage Web Administrator, please see the Advantage help. The Advantage Web Administrator is shown in Figure 2, where the contents of the Error logs are displayed.

Fig. 2: The Advantage Web Administrator

Online Maintenance

Online maintenance is one of the more popular updates introduced in Advantage 11. Online maintenance lets you pack, reindex, and alter the tables of your database without having to first obtain an exclusive lock on those tables. This feature permits you to make significant changes to your tables without having to limit your users' access to their data.

There are three aspects to online maintenance. The first is how Advantage treats these maintenance operations while users are actively using your database. The second concerns a collection of new system stored procedures introduced in Advantage 11 that permit these operations to be initiated programmatically. Finally, each of the online maintenance functions is also associated with a notification that will trigger when the operation is complete.

How Online Maintenance Is Applied

Normally, pack, reindex, and alter operations require that you have exclusive access to the table on which you are performing the operation. This often means that these operations can be performed only after taking the database offline, which, in a production environment, can be difficult, or at least inconvenient.

When performing online maintenance, Advantage begins by creating a copy of the original table, duplicating both the data and its indexes to the copy. It then uses this copy to apply the requested change. For pack and reindex online maintenance, once any pending transactions have been applied and all users have released their locks, the new table becomes available. Users will begin using the updated table as soon as they make a request that results in a refresh of their cursor, such as navigating to a new record or setting a filter. Because an online alter physically changes the structure of the table (or its indexes), users must actually close and reopen that table before the new structure becomes available.

Online maintenance is only supported using the Advantage Database Server; you cannot perform online maintenance from the Advantage Local Server.

Online Maintenance System Stored Procedures

Advantage 11 introduces three new system stored procedures associated with online maintenance. These are sp_PackTableOnline, sp_PackAllTablesOnline, and sp_ReindexOnline, and they permit you to perform online maintenance programmatically from your custom applications.
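For example, a scheduled maintenance job could pack and reindex a heavily used table programmatically. The following Advantage SQL sketch assumes each procedure simply takes the table name as its argument; check the Advantage help for the exact parameter lists.

/* Pack a table while users remain connected (assumed signature) */
EXECUTE PROCEDURE sp_PackTableOnline( 'archive' );

/* Rebuild its indexes online as well (assumed signature) */
EXECUTE PROCEDURE sp_ReindexOnline( 'archive' );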
Calls to these stored procedures return once the operation has been completed on the copy of the table. However, individual users may continue to use the old version of the table until they reposition their cursor. As a result, a return from sp_PackTableOnline, sp_PackAllTablesOnline, or sp_ReindexOnline does not signal that all users are current, only that all users are in a position to use the new version of the table.

There is no special system stored procedure associated with online alter operations. Instead, you execute a special version of an ALTER TABLE query: ALTER ONLINE TABLE. Other than the ONLINE keyword, the syntax of an ALTER ONLINE TABLE query is the same as for a normal ALTER TABLE statement.

Online Maintenance Notifications

Notifications, which were introduced in Advantage 9, provide a mechanism by which a client application can request to be informed by Advantage when a particular operation has taken place. For example, a client may ask to be notified when new records are inserted into a particular table, or when updates are made to a table's contents.

There are two parts to a notification. First, a server must issue a specific notification through the execution of the sp_SignalEvent stored procedure. Second, a client must subscribe to a notification and then wait for it. A client subscribes to a notification by calling sp_CreateEvent, and waits for the notification using the sp_WaitForEvent stored procedure. Since the sp_WaitForEvent stored procedure call blocks until the notification is issued, this stored procedure is often called from a worker thread on the client.

In the case of online maintenance operations, Advantage always issues a notification when the online operation has reached a stage where users can begin transitioning to the updated table. In the case of online packing or reindexing, a single notification is issued once users can switch to the new table. In the case of an online alter table operation, Advantage will issue a notification once users can be transitioned to the new table; in addition, Advantage will continue to issue notifications every minute until all users have closed their connection to the old version of the table.

If you want your client applications to be informed once an online maintenance operation has reached the point where a transition to the new table is possible, your client applications must subscribe to, and listen for, these notifications. A client subscribes to an online maintenance notification by calling the sp_CreateEvent system stored procedure, passing in the first parameter an event name that follows a specific pattern. The event name prefix begins with two underscore characters, followed by OP, OR, or OA, which correspond to online pack, online reindex, and online alter, respectively. The middle section of the event name is FinalStage, and the suffix is a single underscore character followed by the name of the table upon which the maintenance was performed. As a result, if you want to subscribe to a notification about the packing of a table named archive, you will use the following string:

__OPFinalStage_archive
By comparison, if you want to subscribe to a notification about the alter table operation on a table named customers, you will use the following string:

__OAFinalStage_customers
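Putting these pieces together, here is a rough Advantage SQL sketch of an online alter plus a client subscribing to the matching notification. The added column is invented purely for illustration, and the sp_CreateEvent and sp_WaitForEvent parameter lists shown are simplified assumptions; see the Advantage help for the actual signatures.

/* Restructure the table without requesting an exclusive lock */
ALTER ONLINE TABLE customers ADD COLUMN LoyaltyCode Char( 10 );

/* On a client: subscribe to the online-alter notification for customers */
EXECUTE PROCEDURE sp_CreateEvent( '__OAFinalStage_customers', 0 );

/* Usually called from a worker thread: blocks until the event fires
   or the (assumed, in milliseconds) timeout elapses */
EXECUTE PROCEDURE sp_WaitForEvent( '__OAFinalStage_customers', 30000, 0, 0 );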
To learn more about notifications in general, and online maintenance notifications in particular, please see the Advantage help.

Replication

Replication, which was first introduced in Advantage 8, provides a mechanism for duplicating changes made to one database in one or more other databases. For example, replication can be used to copy changes made at regional data centers to a centralized data center, where the combined data can be analyzed and reported. Similarly, replication can be implemented in a bi-directional fashion, permitting two databases to share the changes made to each of them individually.

There are a number of requirements that must be met before you can implement replication. First of all, replication requires that the source database and the target database both use data dictionaries. Second, both the source data dictionary and the target data dictionary must be accessed through the Advantage Database Server (ADS); replication cannot be implemented using the Advantage Local Server. Finally, an additional replication license must be installed on the source server, and sometimes the target server as well, depending on how you configure your replication.

Replication is an extremely valuable mechanism for those developers who need it. Fortunately, several significant capabilities have been added to Advantage's replication in Advantage 11. These include replication to older Advantage servers, additional system stored procedures to examine the replication queue, and the ability to more easily edit the replication queue. Each of these capabilities is described in the following sections.

Replication to Older Advantage Servers

A significant update, one that I know will delight many developers, is the ability to replicate between the new Advantage 11 server and servers running older versions of Advantage. In Advantage 11, replication to Advantage 9 and later servers is supported. In all previous implementations of replication, an older Advantage server could replicate to a newer server, but a newer server could not replicate to an older one.

What makes this update so valuable is that it permits an organization to upgrade one or more of its servers to take advantage of the many new features in Advantage 11 without having to upgrade every server involved in replication. For example, a company may want to upgrade the server at its main headquarters so that it can use the Advantage Web Platform, the new SQL capabilities, and online maintenance. If that server is involved in replication with satellite offices, those offices can continue to use their Advantage 10 servers in the meantime, upgrading at a later date when it is convenient. Replicating to an older server from Advantage 11 requires some setup; see the Advantage help for a detailed description of these steps.

Viewing the Replication Queue

The replication queue, which is a table in your data dictionary that you designate to hold the records that will be replicated, now displays its data in a more meaningful order. Previously, the replication queue records were displayed in natural order. Due to efficiencies in how Advantage reuses records when they are deleted, which happens to records in the replication queue once a record has been replicated, the natural order often fails to preserve the order in which the records will actually be replicated. In Advantage 11, the order of records in the replication queue now more accurately reflects the order in which the records will be replicated.

In addition, two new stored procedures assist in your work with the replication queue. You can call the system stored procedure sp_GetReplicationEntryDetails to retrieve detailed information about a record in the replication queue, including the text of the update, insert, delete, or merge statement that Advantage has generated to perform the replication on the target database. The sp_TestReplicationConnection stored procedure permits you to test whether an attempt to replicate to a replication subscription will be successful. This stored procedure is useful since it permits you to test your subscription configuration without actually having to initiate the replication.
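As a loose sketch of how these might be called (the parameter lists below are assumptions, not the documented signatures; consult the Advantage help before relying on them):

/* Verify that a replication subscription can currently be reached */
EXECUTE PROCEDURE sp_TestReplicationConnection( 'HeadquartersSubscription' );

/* Retrieve the generated statement and other details for one queue entry
   (assumed here to be identified by its queue record id) */
EXECUTE PROCEDURE sp_GetReplicationEntryDetails( 1 );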
Editing the Replication Queue

Under normal conditions, replication works fine so long as the servers involved in the replication can maintain their connection, relying on the replication queue for temporary storage during those periods of time when a connection is not possible. However, sometimes anomalies in the data of a record or collection of records prevent the replication from being successful. When this happens, it is necessary to manually delete one or more records from the replication queue in order to allow the replication to proceed normally.

Advantage 11 introduces a new system stored procedure that helps in this process. You can now call sp_DeleteReplicationEntry to remove a problem record from the replication queue. Importantly, if this record was involved in a transaction, sp_DeleteReplicationEntry deletes all records associated with that transaction from the replication queue, thereby maintaining the integrity of the data.

SQL Enhancements

Some of my favorite enhancements introduced in Advantage 11 are associated with Advantage SQL, Advantage's implementation of the structured query language. Among these changes are improvements in parameter access in stored procedures and triggers, an option for variable output parameters from SQL stored procedures, new SQL functions, new system variables, a modulo operator, several new SQL literals, and the SQL Command Line Utility. Each of these features is described in the following sections.

Improved Parameter Access in SQL Stored Procedures and Triggers

In previous versions of Advantage, it was your responsibility to explicitly read your input parameters from the temporary, virtual __input table (this table's name is input preceded by two underscore characters). This could be done using a cursor, which you needed to open, fetch, and close, or by querying the __input table directly. For example, consider the following simple SQL stored procedure:

CREATE PROCEDURE GetCustomer (
   CustomerID Integer,
   FirstName Char( 18 ) OUTPUT,
   LastName Char( 25 ) OUTPUT,
   Address Char( 50 ) OUTPUT,
   City Char( 40 ) OUTPUT,
   State Char( 25 ) OUTPUT )
BEGIN
   DECLARE @Cur CURSOR as SELECT CustomerID FROM __input;
   OPEN @Cur;
   TRY
     FETCH @Cur;
     INSERT INTO __output
       SELECT [First Name], [Last Name], Address, City, State
       FROM Customer
       WHERE [Customer ID] = @Cur.CustomerID;
   FINALLY
     CLOSE @Cur;
   END;
END;

There are a total of four statements associated with the retrieval of the input parameter using this technique, which involves a cursor. And yes, in this case, we could have used a SELECT CustomerID FROM __input statement instead (one statement instead of four), but the cursor approach would have been less verbose if we had five or more input parameters.

Advantage 11 introduces a new option, which requires no extra lines of code. All you need to do is reference the name of the input parameter, prefixed by a single underscore character. Using this technique we can write this same stored procedure using the syntax shown in Figure 3.

Fig. 3: Input parameters of stored procedures can be accessed using the underscore character

This technique can also be used in triggers. Instead of having to use a cursor or a SELECT statement to retrieve the fields of the __new table, simply prefix the field names of the new table with a single underscore character to read the values directly.
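Since Figure 3 is reproduced as a screenshot, here is a textual sketch of the same procedure using the new single-underscore parameter access. This is a reconstruction along the lines the article describes, not the exact listing from the figure:

CREATE PROCEDURE GetCustomer (
   CustomerID Integer,
   FirstName Char( 18 ) OUTPUT,
   LastName Char( 25 ) OUTPUT,
   Address Char( 50 ) OUTPUT,
   City Char( 40 ) OUTPUT,
   State Char( 25 ) OUTPUT )
BEGIN
   /* _CustomerID reads the input parameter directly;
      no cursor or SELECT against __input is needed */
   INSERT INTO __output
     SELECT [First Name], [Last Name], Address, City, State
     FROM Customer
     WHERE [Customer ID] = _CustomerID;
END;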
Variable SQL Stored Procedure Output

Previously, you had to explicitly declare each of the output parameters of your SQL stored procedure using the OUTPUT keyword, as seen in the two preceding procedures. Furthermore, the output of your SQL stored procedures was necessarily fixed in terms of the number of fields in the result table. In Advantage 11, you can specify that the output is variable, permitting you to omit the output parameters altogether. When you do this, the number of fields produced by your stored procedure execution depends on the number of fields in the last SELECT statement in your stored procedure. This can be seen in Figure 4, which includes a stored procedure that supports variable output parameters.

Fig. 4: Stored procedures that return result sets can return a variable number of columns
SQL Command Line Utility

Another highly anticipated addition to Advantage's support for SQL is the new SQL Command Line Utility. It provides two distinct capabilities. First, it can act as a command line window, similar to the Windows command prompt, from which you can connect to a database and then execute ad hoc queries. In this mode you are presented with a prompt from which you can enter queries, as well as a small set of interactive commands that permit you to connect to databases, list queries you previously executed, flush the cache of previously executed queries, and exit this interactive mode. The second capability is the ability to create batch operations that execute single SQL statements, or execute SQL scripts that perform multiple operations. One important application for this usage is to create regularly scheduled batch operations, such as initiating backups or performing maintenance operations.

The SQL Command Line Utility is installed with many of the Advantage packages, including the Advantage Data Architect. To run the SQL Command Line Utility in interactive mode, run asqlcmd.exe from a command prompt, or run it from Windows Explorer. Figure 5 shows the SQL Command Line Utility running in interactive mode.
Fig. 5: The SQL Command Line Utility in interactive mode

In batch mode, you will execute asqlcmd.exe from a batch file, or by scheduling a task using the Windows Task Scheduler. When run in this mode, you pass command line parameters to the SQL Command Line Utility to provide it with connection string information, or direct it to read from one or more files where it will find connection information, queries to execute, or SQL scripts.

Here is a typical scenario of how the SQL Command Line Utility might be used. Imagine that your database is part of an interactive shopping Web site. During the day users might browse your inventory and create virtual shopping carts, which are records in your database that identify the items users have selected. While many of your customers will continue to checkout, after which you can delete the contents of the temporary shopping carts, moving those items to an invoice table, other users will lose interest and leave your site without checking out. Using the Windows Task Scheduler, you might run a batch file late each evening. This batch file could invoke the SQL Command Line Utility, instructing it to run a SQL script that looks for shopping carts that never made it to checkout, and which have remained unused for more than four hours or so. Those shopping carts could be considered abandoned, and the SQL script could delete those unnecessary records from their tables.

New SQL Engine Functions

SQL engine functions are subroutines that you can use in your SQL queries, whether they are used in stored procedures, SQL scripts, user defined functions, or triggers. The SQL engine includes four new functions.

Three of these provide information about the contents of strings. StartsWith returns True if a specified string begins with a specified substring. Likewise, EndsWith returns True if a string ends with a particular substring. Finally, SubstringOf returns True if the specified substring is contained in the specified string.

The fourth function is LastRowID. This interesting function returns the unique row id value for the most recently added row in a table. You can use this function to identify the most recently added record in a table without knowing any of the particulars of the table's structure, such as its primary key, and without having to rely on auto-increment fields, which introduce a whole slew of problems of their own. The new SQL engine functions are listed in Table 1.

Table 1: The New SQL Engine Functions

The Modulo Operator

Advantage SQL supports a new operator, the modulo operator (%). The modulo is the remainder following a division operation on two numbers. For example, if you divide 5 by 4, the remainder, or modulo, is 1. If you divide 6 by 2, the modulo is 0. The modulo operator performs essentially the same task as the MOD() SQL engine function. Figure 6 shows the use of the modulo operator in a query and the result that it produces.

Fig. 6: A query using the modulo operator

New System Variables

System variables are symbols that are automatically available within your SQL queries and SQL scripts. Unlike the custom variables that you must declare in your SQL scripts, system variables are predefined, and can be used in expressions simply by referencing them. Five new system variables are introduced in Advantage 11, and all of them are connection-level system variables. Connection-level system variables are prefixed by two colons, the letters conn, followed by a period and the variable identifier. An example of one of these new connection-level system variables is ::conn.OSUserLoginName.

Table 2: The New System Variables
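As a quick, concrete aside before looking at the tables in detail, the following sketch exercises the modulo operator, one of the new string functions, and the system variable named above. The StartsWith argument order is an assumption, and system.iota is used here as Advantage's single-row utility table; consult the Advantage help for the exact signatures.

SELECT
   17 % 5 AS Remainder,              /* yields 2 */
   ::conn.OSUserLoginName AS LoginName
FROM system.iota;

/* One of the new string functions (argument order assumed) */
SELECT * FROM Customer WHERE StartsWith( [Last Name], 'Jen' );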
Table 2 contains a list of the five new system variables, along with a brief description of their associated values. Figure 7 shows a query that includes the five new system variables, as well as the resulting output. This particular query was executed from a virtual machine.

Fig. 7: A query using the new system variables

New SQL Literals

Four new SQL literals have been introduced for Advantage SQL. Three of these literals can be used to represent Timestamp, Date, and Time values in queries or SQL scripts, and supplement existing literals for those types. For example, in addition to previously available literals for Timestamp values, you can now use a literal with the following format:

datetime'yyyy-mm-dd hh:mm:ss'

For example, the following query will select all records where the FromTimestamp field is less than 3:00am on January 1, 2012:

SELECT * FROM [Accounts]
WHERE [FromTimestamp] < datetime'2012-01-01 03:00:00';

Date and Time literals can now also be represented by date'yyyy-mm-dd' and time'hh:mm:ss', respectively.

The fourth new literal type represents binary data. You define a binary literal by preceding a string with the letter x (upper or lower case). The string must contain a hexadecimal value consisting of the characters 0-9, a-f, and A-F. Spaces can be used for readability, but are not required, and there is no limit to the number of characters that can appear in the string. The following two queries are equivalent, and both test the Key binary field for a very small binary value:

SELECT * FROM Products WHERE Key = x'29 a2 f3 0e 16 76 ee';

SELECT * FROM Products WHERE Key = X'29a2f30e1676ee';

New System Stored Procedures

System stored procedures are built-in routines that you can execute using the SQL EXECUTE PROCEDURE syntax. These stored procedures are typically used to control or interrogate your databases, permitting you to add intelligence to the various operations that you need to perform. In discussing some of the new features introduced in Advantage 11, I have mentioned in passing new system stored procedures that you can use. Table 3 contains a list of all of the new system stored procedures introduced in Advantage 11. Many of these stored procedures, like many of those introduced in previous versions of Advantage, require that they be executed from a user account with specific privileges. For example, sp_PackTableOnline is only available if executed from a user account having the DB:Admin role or appropriate effective permissions.

Table 3: New System Stored Procedures

Other SQL Enhancements

In addition to the SQL features mentioned above, there are several other enhancements to Advantage SQL. For one, there is now support for SQL scripts larger than 32 KB in size for SQL-based views, triggers, stored procedures, and user defined functions. Advantage now also supports transaction logs larger than 4 GB in size, as well as SQL intermediate files greater than 4 GB (in both 32-bit and 64-bit environments). While few users will be affected by these changes, those users who do need such large operations will no doubt welcome the removal of these limitations.

Connection Enhancements

There are a number of enhancements in Advantage 11 associated with connections. These are discussed in the following sections.

Connection Pooling

Connections made through the Advantage Client Engine (ACE), which is used by most of the Advantage connectivity options, now support connection pooling. When pooling is enabled, the Advantage Client Engine will not automatically close a connection that has been returned from a client application. Instead, the connection may be kept active for a period of time, making it available if another client requests a connection. Connection pooling is particularly useful in situations where many different connections are required for short durations, a scenario that is particularly common in Internet applications. Making and dropping connections to a database tend to be expensive operations, and connection pooling goes a long way toward eliminating a significant bottleneck.
Connection pooling is enabled in the Advantage Client Engine through the new Pooling connection string parameter. When Pooling is set to True, connection pooling is enabled; the default is False. Additional connection string options that apply to connection pooling include Connection Timeout (how many seconds a client will be permitted to wait for a connection), Connection LifeTime (how long a connection in the pool will remain open), Max Pool Size (the maximum number of connections to maintain in the connection pool), and Unused Timeout (how long an unused connection in the pool will exist before being closed). For example, a connection string might include something along the lines of Pooling=True;Max Pool Size=50 in addition to the usual options (exact syntax per the Advantage help).

Other Connection String Options

In addition to connection pooling, the Advantage Client Engine introduces a number of additional connection string options. All of these options are supported by the new AdsConnection101 API.
These connection string parameters include Decimals, ShowDeleted, and DateFormat.

Web Versus Traditional Connections

In previous versions of Advantage, all connections were treated similarly, in that they all consumed one of the available licensed connections. For example, a connection from an Internet-based application consumed a connection that could also have been used by a traditional client application (though in many cases, there were additional licensing issues that either limited or prohibited the use of Advantage from an Internet-based application).

With Advantage 11, you can specifically designate how many connections are allocated for Advantage Web Platform versus traditional connections. This can be seen in Figure 8, where the Maximum Web Platform Users option permits you to limit how many of the connections are consumed by Web Platform operations.

Fig. 8: The new Web Platform Users option reports the number of available Web platform connections

Even if you do not configure a Web connection (leaving Web Platform Users set to zero), Advantage allows one connection by the Advantage Web Platform for connecting the Advantage Web Administrator for configuration purposes. If you are also using the Advantage Web Platform for its OData connectivity, you will definitely want to set Maximum Web Platform Users to a positive integer, to accommodate the additional connections through the RESTful service.

One thing to keep in mind when configuring Maximum Web Platform Users is that clients that connect through the Advantage Web Platform using OData do not share connections from the connection pool in a manner typical of other Internet-based architectures. Instead, these users are treated as normal clients: each individual connection through OData consumes one user license, and that license remains in use by that connection until the user disconnects or the connection times out. In other words, you should set Maximum Web Platform Users to the maximum number of OData clients that your system is designed to support.

Developer-Related Enhancements

While Advantage has always been notable for its many connectivity options, this release of Advantage has seen a significant increase in these options. As discussed earlier, the Advantage Web Platform, and its OData support, permits access to your data from almost any development environment. That said, Advantage's language- and environment-specific access mechanisms tend to provide direct support that greatly exceeds the "lowest common denominator" approach offered by open specifications such as OData. As a result, it is significant that Advantage has introduced additional connectivity options in Advantage 11.

These connectivity options can be divided into two groups: new connectivity options and enhanced connectivity options. These are discussed separately in the following sections.

New Connectivity Options

In addition to the connectivity options enabled by the Advantage Web Platform's support for OData, Advantage 11 introduces three new developer libraries. Together, these provide access for developers using Ruby, Python, and PHP.

There are actually two APIs (application programming interfaces) introduced in Advantage 11 intended for Ruby developers. The first is the Advantage Ruby API, and it provides Ruby developers access to the Advantage Client Engine API. The second is a package for Ruby developers that uses the Advantage Ruby API, and it implements ActiveRecord, an object-relational mapping (ORM) API commonly associated with Ruby on Rails.

The new Advantage Python Driver gives Python developers access to Advantage. There is also now an Advantage Django Backend, a package that uses the Advantage Python Driver to enable Advantage access from Django, a Python Web framework that supports RAD (rapid application development) for Web sites.

Advantage 11 also introduces a new PHP driver. While Advantage has supported PHP for many years with its Advantage PHP Driver, a new driver, the Advantage PDO (PHP Data Objects) driver, is also available, increasing Advantage's support for PHP development.
Enhanced Connectivity Options

Some of Advantage's existing connectivity options have been updated with the release of Advantage 11. For example, the Advantage RDD (replaceable database driver) for Visual Objects now supports parameters in the AdsServerObject class. In addition, a 64-bit version of the Advantage TDataSet for Lazarus is now available. Lazarus is a free IDE (integrated development environment) for the Free Pascal Compiler. The Advantage .NET Data Provider also received an upgrade, and the templates for creating Advantage AEPs (Advantage Extended Procedures) and triggers using C# for .NET and Visual Basic for .NET have been improved.

But most of the enhanced data connectivity features can be found in Advantage's support for Delphi. Advantage 11 provides support for Delphi XE2, for both 32-bit and 64-bit Windows compilation. This means that you can now build 64-bit AEPs and triggers for 64-bit Advantage using Delphi. Another improvement introduced recently in Delphi, delayed loading of DLLs, has also now been implemented in the Advantage Delphi Components library. This feature allows the Advantage components to delay loading of the ACE DLLs until they are first needed, as opposed to loading them statically when your Delphi application is initializing. (Delayed static loading of DLLs was introduced in Delphi 2010.)

ADS 11 introduced a number of new connection string options, which
are represented by the AdsConnection101 interface. These are supported in the TAdsConnection component found in the Advantage Delphi Components package, specifically through the ExtraConnectionString property. The enhanced TAdsConnection component also introduces a new DateFormat property, which permits your application to define the default format for dates on a per-connection basis. Advantage has also improved its support for the TDataSet Locate method: this method now makes more efficient use of available indexes, improving the performance of Locate invocations.

Additional Improvements

The updates, additions, and enhancements that I've mentioned up until now have applied to specific areas of developer concern. But there are additional improvements found in Advantage 11 that I am categorizing as miscellaneous.

One notable update that has been found in all of the preceding releases of Advantage is once again present in Advantage 11, and it is related to performance. These improvements are scattered about the product, and will affect your applications depending on which of Advantage's features you use. But one of these enhancements in particular will affect nearly all Advantage installations: Advantage 11 has updated expression parsing, and this provides a significant improvement in the speed of opening tables and indexes. On some platforms, such as Windows Server 2008 R2, and in those Advantage installations under significant concurrent load, the performance improvement is quite large.

Another area of general improvement is related to error messages. A number of error messages have been updated, providing additional information not previously available. For example, some error messages that previously reported a problem with a field now actually include the name of the field where the problem occurred, a welcome update that will be appreciated by developers. The detection of corruption in memo fields has also improved in Advantage 11, and when a problem is detected, an appropriate exception is raised, alerting you to the issue and helping you to resolve the problem by packing the table or taking some similar preventative measure.

Another error message-related update is revealed by the new option for database restore operations to suppress warnings associated with automatic table creation. Many developers do not consider these to be actionable errors; warnings that were previously created when an auto-create table was being created can now be suppressed.

Advantage 11 has also introduced a number of enhancements related to security and encryption. For example, the AdsBackup utility now supports FIPS (Federal Information Processing Standard) and SSL (Secure Sockets Layer) options, as do many of the Advantage client libraries. Note that support for FIPS 140-2, a government standard that defines security standards for encryption support, requires the FIPS Encryption Security Option Add-on, which is a separate license available from Advantage.

Finally, there are two new roles that can be applied to users and groups. These are administrative in nature. SERVER:Admin is a role that gives the users to which it is applied the rights to a number of administrative tasks associated with the new root dictionary when connected through the Advantage Web Administrator. Likewise, SERVER:Monitor is a role that gives its users enhanced information-gathering rights, again when connected to the root dictionary.
See "What's New in Advantage 11" in the Advantage help for a complete list of new and enhanced features found in Advantage 11. Summary Similar to previous releases of Advantage, Advantage 11 comes filled with a wealth of new features that continue the tradition of high performance and ease-of-use. Some of these features, such as the support for online maintenance and the Advantage Web Platform, will fundamentally change the way that some users manage and access their data. Others, such as the default fetching of stored procedure parameters and variable output parameters, will dramatically improve developer productivity. These features, when taken together, make Advantage 11 yet another in a string of must have upgrades. •
Cary Jensen

Cary Jensen is the bestselling author of more than 20 books on software development, including Delphi in Depth: ClientDataSets, and winner of the 2002 and 2003 Delphi Informant Reader's Choice Award for Best Training. A frequent speaker at conferences, workshops, and seminars throughout much of the world, he is widely regarded for his self-effacing humor and practical approaches to complex issues. Cary has a Ph.D. from Rice University in Human Factors Psychology, specializing in human-computer interaction. His company Web site is at http://www.JensenDataSystems.com.
TIP: Delphi XE3 Starter Essentials

In early September the Delphi XE3 Starter Essentials ebook by Bob Swart was released; it is a free download for all registered users of the Delphi XE3 Starter edition. SDN members can now also download this book free of charge from http://www.eBob42.com/sdn/DelphiXE3Starter.zip, with the word "Brandevoort" as the password for the ZIP file (the name of the neighbourhood in Helmond where the author lives). It is the first of a collection of electronic courseware books about Delphi XE3 that will appear in the coming months; see http://www.eBob42.com/courseware for more details.
Windows Azure SDK 1.8 for .NET is out. The SDK is available for both Visual Studio 2010 and Visual Studio 2012, and you can install it via the Microsoft Web Platform Installer. If you type "Windows Azure" in the search box, you will find the necessary installation bits.
CLOUD
Marcel Meijer
Using 32 bit (Legacy) DLL on Windows Azure

On Windows Azure (almost) anything is possible. You are not limited to applications based on .NET: Java, PHP, NodeJS and many other languages, big and small, are perfectly usable on the Windows Azure platform, Windows Azure Websites and Windows Azure Virtual Machines. At its core, Windows Azure is a 64-bit Windows Server 2008 farm, but even then you are not limited to 64-bit programs or DLLs. That is very convenient, because it means you do not necessarily have to rebuild 'legacy' software, and whether that legacy was written in a Microsoft language or something else hardly matters either. For a customer we carried out a Windows Azure proof of concept in which we had to deal with a 32-bit C++ DLL with a kind of memory leak. The result had to be able to run in the cloud, and scale properly as well.

How did we approach this? Let's first look at the C++ DLL. As mentioned, this component has a kind of memory leak. Bear in mind that the DLL was once intended for, and used in, stand-alone desktop applications; it was never really meant to work in a distributed environment. If the component was called by two websites simultaneously, the results of both calculations got mixed up with each other. The DLL has a piece of shared memory that is not unique per process, and that is what causes the mixing. In .NET this is relatively simple to solve by using a Mutex:

private static readonly Mutex mutex = new Mutex(); // assumed declaration: one mutex per process guarding the DLL call

public int Calculate32Mutex()
{
    int result = 0;
    mutex.WaitOne();
    try
    {
        result = Calculate32(); // calls the actual 32bit dll function
    }
    finally
    {
        mutex.ReleaseMutex();
    }
    return result;
}

Because of the Mutex, every call has to wait until the previous call has finished. That is annoying if you expect a serious load, but we are going to host this on Windows Azure, where we can scale nicely and handle the load that way. That still leaves the 32-bit problem, for which there are two solutions:

1. We can host the DLL in a 32-bit console application.
2. We can put the IIS application pool in 32-bit mode.

Let's look at both.

Solution 1: hosting the DLL in a 32-bit console application

We create a console application and, in the build options, set the Platform target to x86. In the Main of this console application we then put this code:

static void Main(string[] args)
{
    Uri address = new Uri("net.pipe://localhost/CalculatorService");
    NetNamedPipeBinding binding = new NetNamedPipeBinding();
    binding.ReceiveTimeout = TimeSpan.MaxValue;
    using (ServiceHost host = new ServiceHost(typeof(CalculatorDll)))
    {
        var ff = host.AddServiceEndpoint(typeof(ICalcService), binding, address);
        ServiceMetadataBehavior metadata = new ServiceMetadataBehavior();
        host.Description.Behaviors.Add(metadata);
        host.Description.Behaviors.OfType<ServiceDebugBehavior>()
            .First().IncludeExceptionDetailInFaults = true;
        Binding mexBinding = MetadataExchangeBindings.CreateMexNamedPipeBinding();
        Uri mexAddress = new Uri("net.pipe://localhost/CalculatorService/Mex");
        host.AddServiceEndpoint(typeof(IMetadataExchange), mexBinding, mexAddress);
        host.Open();
        Console.WriteLine("The receiver is ready");
        Console.ReadLine();
    }
}

The service is made available through a named pipe. The method on this service then contains a call to the function in the 32-bit DLL:

[DllImport("Win32Project1.dll", SetLastError = true)]
public static extern Int32 Calculate(Int32 delay);

public int RekenDllExample()
{
    return Calculate(2000);
}

To make all of this scalable, I chose to put it behind a 'normal' WCF service. If you want to use the DLL host directly from the website, that is of course possible too, but then scaling becomes very limited.

The WCF service then needs to connect to this named pipe. The most convenient place to do that is in the Global.asax, in the Application_BeginRequest method. There I have two methods: one to start the DllHost and one to make the connection.

string dllHostPath = @"Redist\DllHostx86.exe";
private const int ClientInitTimeOut = 20; // in seconds

protected void Application_BeginRequest(object sender, EventArgs e)
{
    // Make sure that our dll host is running
    EnsureDllHostRunning();
    // Make sure the client is connected
    EnsureCalcServiceClientConnected();
}

private void EnsureDllHostRunning()
{
    Process[] p = Process.GetProcessesByName(
        Path.GetFileNameWithoutExtension(dllHostPath));
    if (p.Length == 0)
    {
        ProcessStartInfo psi = new ProcessStartInfo(Path.Combine(
            AppDomain.CurrentDomain.BaseDirectory, dllHostPath).ToString());
        Process dllHost = Process.Start(psi);
    }
}

private void EnsureCalcServiceClientConnected()
{
    CalcServiceClient client;
    client = (CalcServiceClient)Application["CalcServiceClient"];
    if (client == null || client.State !=
        System.ServiceModel.CommunicationState.Opened)
    {
        client = GetCalcServiceClient();
        Application["CalcServiceClient"] = client;
    }
}

GetCalcServiceClient looks like this:

private CalcServiceClient GetCalcServiceClient()
{
    CalcServiceClient serv = null;
    int retryCount = 0;
    bool connected = false;
    while (retryCount < ClientInitTimeOut * 10)
    {
        try
        {
            EndpointAddress address = new EndpointAddress(
                "net.pipe://localhost/CalculatorService");
            NetNamedPipeBinding binding = new NetNamedPipeBinding();
            binding.ReceiveTimeout = TimeSpan.MaxValue;
            serv = new CalcServiceClient(binding, address);
            serv.Open();
            if (serv.State ==
                System.ServiceModel.CommunicationState.Opened)
            {
                connected = true;
                break;
            }
        }
        catch (Exception e)
        {
            // connection not ready yet; swallow and retry
        }
        retryCount++;
        System.Threading.Thread.Sleep(100);
    }
    if (!connected)
    {
        throw new TimeoutException(
            "Couldn't connect to the calculator service.");
    }
    Application["CalcServiceClient"] = null;
    return serv;
}

In your project you also need to add a service reference to the DllHost console application. This is relatively simple: you start the console application and choose Add Service Reference. Afterwards, endpoint details have been added to the Web.config; this endpoint is net.pipe://localhost/CalculatorService. Don't worry about localhost, because the DllHost application and the WCF service that ultimately calls the endpoint run in the same instance. That makes localhost always valid, in the cloud as well. You also need to add a folder to your solution containing the DllHost console app and the 32-bit DLL. Don't forget to set the 'Copy to Output Directory' property to 'Copy Always' or 'Copy if newer'.

Solution 2: putting the IIS app pool in 32-bit mode

This solution is the least invasive. The only tricky part (I am not an IT pro) is how to do this from a script. After all, you can do anything on a Windows Azure instance as long as you can automate it. Once it is scripted, you can run the script from a startup task and everything is taken care of.

The command to do this is:

REM make apppools 32bit
%windir%\system32\inetsrv\appcmd set config -section:applicationPools -applicationPoolDefaults.enable32BitAppOnWin64:true

Put this command in a startup.cmd file and place it in the project in question. In the ServiceDefinition.csdef we define the startup task, which typically looks something like this, and we are done:

<Startup>
  <Task commandLine="startup.cmd" executionContext="elevated" taskType="simple" />
</Startup>

Here too you need to add a folder to your solution containing the 32-bit DLL. Again, don't forget to set the 'Copy to Output Directory' property to 'Copy Always' or 'Copy if newer'.

Drawback of the solutions: if dynamic configuration is needed, the console application is the least convenient. As said, it is a console application with an app.config; by default it cannot simply read from the service configuration settings, whereas in the other solution the web service can.

Testing

To test the two solutions I built a front end with four buttons. The first two buttons call a WCF service that uses the DllHost console application; the last two buttons call a WCF service that uses the app pool in 32-bit mode. There are result labels that show a 0, a 1, or higher. If the number is 0, the calculation module was called and nothing was mixed with another call. If the number is greater than 0, the result was mixed with another call. You can test this by opening two browsers side by side and pressing the buttons at the same time (there is still time): a 0 and a 1 will appear. Unless several blog readers happen to be testing at the same moment ;-)

If you do the same with the Mutex buttons, the result will be 0 for both.

http://rekenmoduletest.cloudapp.net

The story above does not only apply to Windows Azure; it in fact applies to any 64-bit server environment, on premises or at another hosting party.

References:
http://blogs.msdn.com/b/haniatassi/archive/2009/03/20/using-a32bit-dll-in-the-windows-azure.aspx
http://blogs.msdn.com/b/rakkimk/archive/2007/11/03/iis7-running32-bit-and-64-bit-asp-net-versions-at-the-same-time-on-differentworker-processes.aspx •
Marcel Meijer

This 42-year-old has been active in the ICT world for more than 15 years. At the moment he mainly occupies himself with Azure, Cloud, C#, software development, architecture in general, and Windows Phone 7, and he also works with BizTalk and SharePoint. He works as a Senior Architect at VX Company. In his spare time he is .NET track owner, editor-in-chief and board member of the SDN. Within the SDN he is responsible for arranging speakers for the SDN Events (SDE) and for arranging and editing articles for SDN Magazine, and he is co-responsible for the final editing of the hardcopy and digital magazines and for the content of the SDN Conferences. On 1 October 2010 he became an MVP.
TIP: Windows Azure Store
Genuinely new on the portal are Add-ons and the Store. The Store is interesting for ISVs and vendors of handy tools. At the moment the Store is only available in the US. The idea of the Store is that you can buy tools/apps there, which you then install and use within your own subscription. The purchased items are listed under Add-ons. You are therefore not dependent on the third party, as you are with the Windows Azure Market; Microsoft handles the entire sales process. This offers great opportunities for various vendors and developers to offer their handy tools and functionality right at the source. The whole handling on the Windows Azure portal is backed by an API that you can hook into. The principle works just like choosing a Windows Azure website or a Virtual Machine: you pick from a list of tools, choose a payment plan and pay. After that, the tool is provisioned for you.
GENERAL
Sander Hoogendoorn
The Waterfall Dentist Anti-Pattern
Believe it or not, I have a confession to make. I'm currently in a pure waterfall project. It's my first in many, many years, and despite the fact that I love the technology, I don't like our way of working one bit. During the first phase of this project we are trying to deliver twenty functional work items. All twenty work items are analyzed first, then reviewed. After all review comments have been processed, all work items are designed. And again reviewed. After all review comments have been processed again, the developers start constructing them. And once they finish, the testers have a go at the software produced. Yes, I know this is neither efficient nor effective. And yes, the testers find lots of defects that could easily have been prevented using an iterative approach where we jointly work on a single work item, and only after it's finished move on to the next. But we just don't. And the delivery date is nearing. As a result, at the delivery date we will likely deliver twenty half-baked work items, instead of delivering maybe ten or twelve complete ones. It's a choice.

End of discussion
You could say that the discussions about the chosen approach I have with the project manager are vivid. At the very least. "No, we're not going to do agile," he clearly states. "I don't care if we call it agile or not," I respond, "but the way we are working now is simply inefficient." Personally I really don't care if projects are tagged as agile, Scrum, Smart, RUP or even waterfall. I just dislike inefficient processes. "This approach is easy to explain to the client," the project manager continues, "so we are going to stick to it. End of discussion." And then, coming out of my girlfriend's apartment building this morning, I saw this car in the parking lot. It occurred to me immediately: we are waterfall dentists.
Imagine you're a young dentist, fresh out of dentistry school, and you are starting a new practice. Why on earth you would name it The Waterfall I don't know, but twenty new patients are lining up to be treated. Being the waterfall dentist, you use an appropriate approach. During the first week you examine all twenty patients. During the second week you begin the first part of their treatment: drilling holes. Then in the third week, all twenty patients come back and have the holes filled. In the fourth week all patients return again for a last check-up, just testing whether all holes are filled properly. Finally, in the fifth week, you get to send twenty bills to twenty happy patients. Project done. Next project.

Good bookkeeping and new patients
Doesn't that sound good? This approach resembles my current project in great detail. So why is it that, although this model is easy to explain to patients, no dentist in the world would even slightly consider applying it? Well, first of all, your patients don't like to walk around with unfilled holes in their mouths for at least a week. Second, your patients will have to come in to your office four times, and you will have to do pretty good bookkeeping to remember each patient's details when they return. If you had filled a patient's tooth right after you drilled the hole, this need for documentation and good memory wouldn't have existed. Moreover, what if a new patient comes in during the second or third week with a terrible toothache? You would have to decline him, saying: "Sorry, I don't take additional work now," advising the new patient to come back after all current patients are serviced. It's the vicious change request. And what if you fall ill along the way? Your whole schedule will certainly slide. As a result, none of the patients will have been fully serviced at the end of the scheduled period, leaving them in uncertainty about when you will deliver. Basically, you will lose your patients as fast as they lined up. And you won't be able to send any bills. I could probably spend another two posts explaining why the waterfall model isn't applied by dentists anywhere in the world. But I won't.

We just might
Dear project managers: despite the fact that the waterfall model is so easy to explain to clients, that doesn't make it any more effective or efficient. Armed with The Waterfall Dentist Anti-Pattern, you can explain iterative, or agile, approaches to your clients just as easily. In fact, you can even serve your clients better, as they may send in new urgent patients during the project, and you will be able to serve those patients early. And even though you may fall ill during the project, by servicing the patients one by one, rather than in a returning batch, you can actually fully service a large number of them within the designated timebox, instead of having serviced all of them halfway when the deadline approaches. I would much rather be able to send bills to fifteen out of twenty patients than not be able to bill at all. So where does my current project stand? Well, just as the waterfall dentist might actually serve twenty patients during the current phase, we might actually deliver all twenty work items. But I wouldn't bet my gold teeth on it. •
DESKTOP
Lammert Vinke
Maintainable GUI tests with Coded UI
Continuous Delivery has become increasingly popular in recent years. With Continuous Delivery, a team verifies at every delivery whether the application still works according to the requirements, so the team runs its tests more often during the lifecycle of the application. This makes it ever more worthwhile to automate tests. End-to-end tests through the GUI are one (partial) way to fill in this test automation; a characteristic of end-to-end tests is that the application under test is fully integrated with the other systems. Several tools can help you here. In this article I will discuss how to do this with Visual Studio in combination with Coded UI. A frequently recurring question is: how does an automated test stay maintainable? In this article I will therefore also show how to realize a maintainable test set.

Test Automation Pyramid
The literature already describes many kinds of automated tests. Mike Cohn, among others, writes about the Test Automation Pyramid. This pyramid shows that, besides automated end-to-end tests through the GUI, there are other ways to test an application automatically, such as unit tests or integration tests. Each of these types of automated test has certain characteristics and a specific (test) goal. The Test Automation Pyramid states that you achieve more return on investment when (a) there are more unit tests than integration tests, and (b) more integration tests than end-to-end GUI tests. In short: realize as many automated tests as possible (and as needed) at the lowest possible integration level. The pyramid also indicates that manually executed tests remain necessary; think of usability and exploratory testing.

Fig. 1: Test Automation Pyramid

This article is therefore not a plea to test or automate all end-to-end scenarios through the GUI, but to use end-to-end tests through the UI only for the purpose where they are most valuable, and to first try to test at a lower integration level, for example at the level of a service or class.

Example web application
Although end-to-end tests may be less numerous than unit tests and integration tests, a set of end-to-end tests does add value in some scenarios, because these tests cover different aspects of the application. Using an example application, I will illustrate in this article how to realize a maintainable test set. The example application used in this article is the website of a lottery. On this website (a) interested visitors can buy a ticket, (b) buyers can check what their ticket number was, and (c) buyers can check whether the jackpot fell on the ticket they bought. The application has already been developed; in this article we will build a maintainable collection of test scripts based on Coded UI.

Fig. 2: Demo application

Manual Tests in MTM
Based on the requirements of the lottery application, a tester first creates a number of Test Scripts. Each Test Script consists of a number of test steps, for example: 1. open the application, 2. navigate to the page for buying a ticket, and so on. The tester records the Test Scripts with Microsoft Test Manager, abbreviated in the rest of this article as MTM. Microsoft positions MTM as the tool the functional tester uses to create Test Scripts; the functional tester then executes these Test Scripts manually within MTM. For traceability, the tester links each Test Script in MTM to a requirement. In this way TFS reports give insight into which part of the requirements has already been tested, also known as the functional test coverage, and a team member or stakeholder can trace to what extent requirements have already been tested. During maintenance this also has the advantage that, when requirements change, it is immediately clear which Test Scripts have to be adapted as well.
Fig. 3: Testing in MTM

In MTM, Test Scripts often use Shared Steps. When a number of consecutive steps recur in several Test Scripts, you can turn them into a Shared Step. An example of a Shared Step is opening the browser and navigating to the web application at a specific URL (see the steps marked in red in Figure 3 for the Shared Step "Open Startpage", which opens the browser and navigates to the start page). By setting up Shared Steps in a smart way, you can build a collection of Test Scripts that is easier to maintain. Using Shared Steps can also contribute to higher productivity: you can create a Test Script much faster by reusing various Shared Steps. The tester can then execute these Test Scripts with MTM. For this, MTM opens the Test Runner with the Test Script in execution mode (see Figure 4). For each step in the Test Script, the tester performs the actions it describes. When a step succeeds, the tester marks it as passed, and otherwise as failed. In this way the results of Test Script runs are recorded in TFS.

Fig. 4: Test Runner

While the tester executes the Test Script, MTM records a so-called Action Recording: a recording of the actions the tester performed while running the Test Script. The tester uses this Action Recording to replay the Test Script automatically in subsequent iterations, which is also known as Fast Forwarding. When using Fast Forwarding, the tester still checks the results manually. The Action Recording can later be used by the developer to generate a Coded UI test; a Coded UI test performs the verification steps automatically as well.

From Manual Test to Coded UI Test
Based on the Action Recording linked to a Test Script, the developer generates a Coded UI test in Visual Studio. Among other things, Visual Studio generates a UIMap: a reflection in code of the application under test. When changes are needed in the UIMap, the developer edits it with the editor available for this in Visual Studio (see Figure 4 for a view of this editor) or makes the changes in the generated code-behind file of the UIMap. For each step in the Test Script the tester created in MTM, Visual Studio generates a so-called UI Action; that is why it is important to tick off the executed steps correctly when running the test in MTM. For the various controls in the front end, the UIMap contains so-called UI Controls. A UI Control describes in code (as a property) how the test framework finds a control in the GUI. The various UI Actions then use the elements in this UI Control Map. Besides the UIMap, Visual Studio generates a Coded UI test, which calls the various UI Actions in the right order.

Fig. 5: Coded UI Test Builder

Physically, Visual Studio generates the following files (see also Figure 5):
• The Coded UI test, which calls the various UI Actions (see KopenVanEenLot.cs, WatIsMijnLotNummer.cs and HebIkGewonnen.cs). The developer may modify this file, because Visual Studio does not overwrite it when regenerating a Coded UI test. This class is comparable to a normal unit test, except that a Coded UI test calls the UI Actions in the UIMap. Because it is a kind of unit test, the developer can also use Data Driven Tests;
• The UIMap.uitest, an XML file containing the structure of the UIMap. This file refers to the various generated elements in UIMap.Designer.cs;
• The UIMap.Designer.cs, a C#/VB file. This file contains all the UI Actions (generated methods) and the UI Control Map (properties), and works closely together with the Coded UI test framework. This file must not be modified, because Visual Studio regenerates it whenever the UIMap changes;
• The UIMap.cs, which the developer uses to add custom functionality to the UIMap without modifying UIMap.Designer.cs. The UIMap is thus a partial class.

Fig. 6: Generated Coded UI Test

Maintainability
There are various types of changes to the test object (in this article the lottery application) that lead to changes in the Coded UI test. But how well does a Coded UI test cope with these changes? To make that clear, I describe a few scenarios below:

1. The programmer of the application changes the id of a control in the lottery application. Whoever maintains the test code then also changes the id of the control in the UIMap Editor. This can be done in the UI Control Map via the so-called Search Properties;

2. The business comes up with new functionality for the lottery application, which the team realizes in a second release. For this new functionality you can regenerate part of the UIMap with Visual Studio 2012, again based on an Action Recording. Experience shows that Visual Studio then generates duplicate UI Actions and UI Controls, because these UI Actions or UI Controls already existed for a previously automated Test Script. In that case a team member can remove the duplicated elements from the UIMap with the UIMap Editor, and must then also adapt the Coded UI test itself;

3. As the lottery application grows in functionality, the number of controls in the application will also grow, and so will the UIMap. Experience shows that this leads to a UIMap that is hard to maintain, because the controls become harder to find. For larger systems Microsoft advises creating a UIMap per functional part of the system.

The scenarios described above thus lead relatively quickly to a less maintainable test set based on Coded UI. To prevent that, we advise refactoring the generated code into a maintainable test set, using the proven Page Object pattern. What is the Page Object pattern? For each page in the application, the developer creates a Page Object. This Page Object contains all the UI Controls and UI Actions generated in the UIMap for that one specific page. The developer performs this refactoring step manually, after the UIMap has been generated. The Coded UI tests then use the refactored Page Objects. Structurally, the Coded UI test in Visual Studio looks the same as the Test Script in MTM: for each step in the Test Script there is a function (a UI Action) in the Coded UI test. In MTM, however, the tester also used Shared Steps, which improve the maintainability of the test set in MTM. A team member can apply the same concept when refactoring the Coded UI code: for each Shared Step in MTM, also create a Shared Step in Coded UI. Just like the Test Script, the Coded UI test then calls Shared Steps, only now as Shared Steps in .NET code; a sketch of both ideas follows below.
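To make the refactoring concrete, a minimal C# sketch follows. The UIMap type is the class Visual Studio generates; the three UI Action names (EnterBuyerName, ClickBuyButton, AssertTicketNumberVisible), the URL and the buyer name are invented for this lottery example and are not part of any generated code:

using System;
using Microsoft.VisualStudio.TestTools.UITesting;
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Page Object for the "buy a ticket" page of the lottery demo.
// It hides the generated UIMap behind an intention-revealing API,
// so tests no longer touch UI Actions or UI Controls directly.
public class BuyTicketPage
{
    private readonly UIMap map = new UIMap();

    // Shared Step in .NET code: open the browser on the start page,
    // mirroring the "Open Startpage" Shared Step in MTM.
    public static BrowserWindow OpenStartPage()
    {
        return BrowserWindow.Launch(new Uri("http://localhost/lottery"));
    }

    public void BuyTicket(string buyerName)
    {
        // Delegates to UI Actions generated from the Action Recording.
        map.EnterBuyerName(buyerName);
        map.ClickBuyButton();
    }

    public void AssertTicketNumberIsShown()
    {
        map.AssertTicketNumberVisible();
    }
}

[CodedUITest]
public class KopenVanEenLot
{
    [TestMethod]
    public void BuyingATicketShowsATicketNumber()
    {
        BuyTicketPage.OpenStartPage();

        var page = new BuyTicketPage();
        page.BuyTicket("Lammert");
        page.AssertTicketNumberIsShown();
    }
}

The test now reads like the Test Script in MTM, and a change to a control only has to be fixed in one Page Object.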
Fig. 7: Page Object Pattern and MTM

In the proposed way, the documentation of the automated test set is recorded in MTM, in the form of Test Scripts. This also means that the impact of requirement changes on the automated Test Scripts in Coded UI is easy to trace, because these Test Scripts are linked to requirements.

Integration in the build
Many teams want continuous insight into the quality of the developed product. This is possible with the build system. To realize it, a team member configures a build that, besides building the product, also executes all the Test Scripts automatically, by running the linked Coded UI tests. TFS provides insight into quality by means of various quality indicators. Thanks to the previously mentioned link with requirements, TFS can, for example, easily determine the functional test coverage. In this way TFS helps give continuous insight into the quality of the system.

Conclusion
Microsoft offers a good combination of tools with which different disciplines can work together towards a good automated test set. Based on the requirements, the tester designs the automated test set in Microsoft Test Manager using Test Scripts and Shared Steps. Based on Action Recordings made by a tester, a developer generates Coded UI tests in Visual Studio. The developer refactors these generated tests into a maintainable test set with the help of the Page Object pattern. Thanks to a good link between the automated test set, the Test Scripts / Shared Steps and the requirements, the impact of various types of changes remains easy to determine. In short: a set of tools with which the different disciplines work together towards a maintainable test set. •
Lammert Vinke
Lammert has been working at Info Support as an IT Consultant for about five years. In recent years he has specialized in the Visual Studio product suite and SharePoint. Lammert took part in the Technology Adoption Program for Visual Studio 2012 from the test discipline.
DELPHI
Cary Jensen
JSON with Delphi Part 2
In the previous issue of the Software Development Magazine, I introduced you to Delphi's support for JSON, which is provided primarily by the DBXJSON unit (DataSnap.DBXJSON in XE2 and later). In that issue I demonstrated how to create JSON objects, as well as how to parse them. In this article I continue this discussion with a look at several advanced JSON topics. I begin with a look at Delphi's framework for marshaling and unmarshaling the data in Delphi classes. Included in this discussion is the use of custom converters and reverters, which are necessary when you need to marshal data not supported natively by Delphi. I conclude by sharing a JSON external viewer debugger visualizer, which you can use to examine JSON objects from within Delphi's debugger.
Marshaling and Unmarshaling
While you are always free to create and parse your own JSON objects, Delphi also includes a more powerful set of classes that you can use to package up a Delphi class as a JSON object (marshal) and then later unpack that JSON object back to a Delphi class (unmarshal). These classes are TJSONMarshal and TJSONUnMarshal, and they can handle some of the more common data types that you might encounter in a Delphi class. These classes are found in the DBXJSONReflect unit.

Marshaling in Delphi is actually part of an open framework that you can extend well beyond JSON usage, and it makes extensive use of the extended RTTI (runtime type information) that was introduced in Delphi 2009. Using the classes that ship with Delphi, you can marshal many common fields of a class, so long as that class has a zero parameter constructor (so that RTTI can be used to instantiate the class). These classes internally use helper classes that are referred to as converters and reverters. Converters transform fields into JSON values, and reverters convert these values back to their original types. Currently, Delphi supplies converters and reverters for the following types: integer, string, char, enumeration, float, and object.

Let's begin by looking at a somewhat simple example of marshaling. This demonstration makes use of a simple class named TDataObject, whose declaration is shown here. This class also includes a constructor that initializes the object. The constructor is shown in the following code segment as well.

TDataObject = class
strict private
  FOne: String;
  FTwo: Integer;
  FTimeCreated: TDateTime;
protected
  FData: String;
public
  property One: String read FOne write FOne;
  property Two: Integer read FTwo write FTwo;
  constructor Create;
end;

implementation

constructor TDataObject.Create;
begin
  FData := 'Delphi';
  FOne := 'One';
  FTwo := 2;
  FTimeCreated := now();
end;

The following event handler uses a TJSONMarshal class to marshal an instance of TDataObject to a JSON object, after which the text of the JSON object is written to a TMemo.

procedure TForm1.DefaultMarshalToJSONClick(
  Sender: TObject);
var
  jo: TJSONValue;
  Marshal: TJSONMarshal;
  DataObject: TDataObject;
begin
  DataObject := TDataObject.Create;
  try
    Marshal := TJSONMarshal.Create(
      TJSONConverter.Create);
    try
      jo := Marshal.Marshal(DataObject);
      try
        Memo1.Lines.Text := jo.ToString;
      finally
        jo.Free;
      end;
    finally
      Marshal.Free;
    end;
  finally
    DataObject.Free;
  end;
end;
The following event handler demonstrates how to get a new instance of the TDataObject class based on the string representation of the JSON object created with the preceding event handler.

procedure TForm1.DefaultUnmarshalToObjectClick(
  Sender: TObject);
var
  jo: TJSONValue;
  UnMarshal: TJSONUnMarshal;
  DataObject: TDataObject;
begin
  Memo2.Clear;
  jo := TJSONObject.ParseJSONValue(
    Memo1.Lines.Text) as TJSONObject;
  try
    UnMarshal := TJSONUnMarshal.Create;
    try
      DataObject := TDataObject(
        UnMarshal.Unmarshal(jo));
      try
        Memo2.Lines.Add('One: ' + DataObject.One);
        Memo2.Lines.Add('Two: ' +
          IntToStr(DataObject.Two));
        Memo2.Lines.Add('FData: ' + DataObject.FData);
        Memo2.Lines.Add('FTimeCreated: ' +
          FormatDateTime('c',
          TDateTime(DataObject.FTimeCreated)));
      finally
        DataObject.Free;
      end;
    finally
      UnMarshal.Free;
    end;
  finally
    jo.Free;
  end;
end;

As you can see from this code, care was taken to prevent memory leaks, which is useful in case an exception is thrown during the execution of the code.

This code can be found in the JSONDemo project, which is an extension of the project that I introduced in the last issue of the SDN Magazine. You can download this source code from the following URL: http://www.jensendatasystems.com/code/jsondemo.zip

Fig. 1: JSON created by marshaling a Delphi object, as well as data displayed after restoring the Delphi object from JSON.

The output created by this code is shown in Figure 1. One thing to notice in particular is how the FTimeCreated value was marshaled. In short, it was stored as a TJSONNumber, which is understandable given that Delphi stores TDateTimes as Extended values. Other parsers that might have an opportunity to read this value have no reason to believe that it is a timestamp value. Even if they did, they might not know how to convert this number back to the original timestamp.

Custom Converters and Reverters
There is an even larger potential problem here. Specifically, the value of FTimeCreated was converted because it is a type of float value, which the default converters can handle. Fields that are not supported by Delphi's built-in converters will not be stored, unless you create a custom converter and a corresponding reverter.

Custom converters and reverters are defined by calling an appropriate register function, to which you pass a method. Once registered, Delphi's marshaling mechanism will call your converters and reverters, from which you handle the conversions.

This is demonstrated in the following two event handlers. In the first method, after creating the TJSONMarshal object, the RegisterConverter method is called, to which the object, the field name, and an anonymous method that handles the conversion are passed as arguments. When Marshal is called, this registered converter is used in place of the default converter. Similarly, in the second event handler, after a reference to the UnMarshal object is obtained, a custom reverter is registered to handle the conversion from a string back to a TDateTime.

procedure TForm1.MarshalWithConverterClick(
  Sender: TObject);
var
  jo: TJSONValue;
  Marshal: TJSONMarshal;
  DataObject: TDataObject;
begin
  DataObject := TDataObject.Create;
  try
    Marshal := TJSONMarshal.Create(
      TJSONConverter.Create);
    try
      Marshal.RegisterConverter(TDataObject, 'FTimeCreated',
        function (Data: TObject; Field: string): string
        begin
          Result := FormatDateTime('c',
            TDataObject(Data).FTimeCreated);
        end);
      jo := Marshal.Marshal(DataObject);
      try
        Memo1.Lines.Text := jo.ToString;
      finally
        jo.Free;
      end;
    finally
      Marshal.Free;
    end;
  finally
    DataObject.Free;
  end;
end;

procedure TForm1.UnmarshalWithReverterClick(
  Sender: TObject);
var
  jo: TJSONValue;
  UnMarshal: TJSONUnMarshal;
  DataObject: TDataObject;
begin
  Memo2.Clear;
  jo := TJSONObject.ParseJSONValue(Memo1.Lines.Text)
    as TJSONObject;
  try
    UnMarshal := TJSONUnMarshal.Create;
    try
      UnMarshal.RegisterReverter(TDataObject, 'FTimeCreated',
        procedure(Data: TObject; Field: string; Arg: string)
        var
          RTTIContext: TRTTIContext;
          RTTIField: TRTTIField;
          TimeCreated: TDateTime;
        begin
          TimeCreated := StrToDateTime(Arg);
          RTTIField := RTTIContext.GetType(
            Data.ClassType).GetField(Field);
          RTTIField.SetValue(Data, TimeCreated);
        end);
      DataObject := TDataObject(UnMarshal.Unmarshal(jo));
      try
        Memo2.Lines.Add('One: ' + DataObject.One);
        Memo2.Lines.Add('Two: ' +
          IntToStr(DataObject.Two));
        Memo2.Lines.Add('FData: ' + DataObject.FData);
        Memo2.Lines.Add('FTimeCreated: ' +
          FormatDateTime('c', DataObject.FTimeCreated));
        Memo2.Lines.Add('FTimeCreated as Extended ' +
          FloatToStr(DataObject.FTimeCreated));
      finally
        DataObject.Free;
      end;
    finally
      UnMarshal.Free;
    end;
  finally
    jo.Free;
  end;
end;

The custom converter and reverter pair used in the preceding example convert a TDateTime to a format based on the short format for dates followed by the long format for time, as defined by the ShortDateFormat and LongTimeFormat global variables, which is represented by the 'c' parameter that appears in the first parameter of the FormatDateTime function. While this should serve you well on a single machine, it is probably inadequate for uses where your JSON object is created on one machine and reverted on another.

In those cases it is best if you use a common format, such as the ANSI standard format for date/time values. This format uses the pattern 'yyyy/mm/dd hh:mm:ss'. Daniele Teti of BitTime has blogged about the use of custom converters and reverters, including one that converts and reverts TDateTime values to the ANSI standard date format. You can read more about these at http://www.danieleteti.it/2009/09/01/custom-marshallingunmarshalling-in-delphi-2010/.

Figure 2 shows how the main form of the sample project appears when the custom converter and reverter are used. Notice that in the top memo field the date appears in a human readable form. It is also in a form that could conceivably be evaluated by languages other than Delphi to resolve the stored date/time.

Fig. 2: JSON and restored Delphi object data making use of custom converters

In the bottom memo in Figure 2 you can see the values that can be read from the newly composed object, which was created from the JSON shown in the top memo. Here I display both the formatted version of the TDateTime value, as well as its Extended representation.

Some Closing Thoughts About Marshaling
As mentioned in my previous article about JSON, JSON objects encode data but not functions. So, how does that affect objects that you serialize using the marshaling framework offered in Delphi? Fortunately, the answer is "not much."

It is true that methods and properties are not included in the JSON object created by Delphi's marshaling. However, they are not needed. This is because marshaling is used with classes that Delphi understands. When reverting from JSON to the Delphi object, Delphi creates a new instance of the original Delphi object, after which it assigns the values of the member fields, even those that are strictly private. The created object will have the methods defined for the class, and properties that rely on direct access to member fields should work properly. Those properties based on accessor methods should be fine, too.

There is a second issue, however. Should you use JSON marshaling to move data from one process to another? When your data is being consumed by non-Delphi sources, using JSON is a sound way to provide that data in a more or less universally readable format. However, when the data is being consumed by Delphi on both ends of the conversation, and data is the purpose of the transmission, I personally prefer to use the persistence capabilities of the ClientDataSet instead. After loading a ClientDataSet with data, which can include nested datasets, blob fields, the metadata that describes it all, and sometimes even a change cache, that information can be rendered to text simply by reading the XmlData property of the ClientDataSet. On the receiving end of the data, assigning the text to the XmlData property of a newly created ClientDataSet results in an object indistinguishable from the original. And, if you prefer a stream to text, write the ClientDataSet to a stream using its SaveToStream method, and restore the data by loading that stream into a new ClientDataSet by calling LoadFromStream. It saves a lot of time, and does not require the services of custom converters and reverters.
A JSON Debugger Visualizer
I find that I use JSON quite a lot in my work, especially when I am working with DataSnap servers that use a REST (REpresentational State Transfer) interface. In those cases, it is common to consume JSON from the REST server, and to send data to the REST server using JSON. If you are like me, one of the problems that you might encounter is remembering the structure of your JSON objects. In projects like the JSONDemo project shown in Figures 1 and 2, it is not such a big problem because the event handlers actually display the JSON strings. However, in the real world, you might have to resort to adding the occasional ShowMessage here and there to get a look at your JSON.

There is an alternative, first introduced in Delphi 2010: you can add a debugger visualizer. A debugger visualizer permits you to see a human-readable form of data that the debugger normally cannot display. Delphi 2010 included two such visualizers. One of these, the TDateTime visualizer, is a value replacer visualizer, which simply displays a formatted date/time when viewed in the debugger, instead of the Extended value that TDateTime is based on. The TStringList visualizer is an external viewer visualizer, and it displays some or all of a TStringList in a non-modal dialog box, when you ask it to.

My JSON debugger visualizer is an external viewer visualizer, and I have shamelessly adapted the source code from the TStringList visualizer in creating it. You can find the source code for this visualizer at the following URL: http://www.jensendatasystems.com/code/jsondemo.zip

Like the TStringList visualizer, the JSON visualizer is limited to only 4K worth of text. This is the same limit imposed by the TStringList visualizer, and I have kept that limitation in the belief that it might be necessary for the safe operation of the visualizer within Delphi's debugger. The JSON debugger visualizer is very simple, but very useful. In fact, when I first created the visualizer I had visions of creating an external viewer that would display the JSON in a hierarchical control such as a TreeView. I started the project on my own time, but only got as far as displaying the ToString text of the JSON object in a memo field. I then ran out of time and went back to my paid work, forgetting about the visualizer altogether. It was some time later, when I was coding at a client site, that I stumbled upon the visualizer again. I was working with JSON, and needed to see what the current JSON looked like. I was about to stop the debugger and add a ShowMessage statement to my code when I noticed the tell-tale sign of a debugger visualizer, a magnifying glass symbol, next to my JSONAncestor objects. At first I thought "Wow, Delphi already has a JSON visualizer. I guess I wasted my time trying to create a new one of my own." It was when I clicked on that magnifying glass symbol that I realized that this was my visualizer, and the displayed text immediately answered all of the questions I had about the JSON object's structure. I can honestly say that the visualizer has saved me countless hours, and I hope that you benefit as much from it as I have.

To install the JSON debugger visualizer in Delphi 2010 and later, download the code from the link I provided earlier and open the project. From the Project Manager, select Install. Figure 3 shows how JSON objects will appear in the debugger once this visualizer is installed. Click on the magnifying glass symbol to invoke the JSON debugger visualizer. If the text of your JSON object is greater than 4K in length, the first 4K will be displayed in the resulting non-modal dialog box, along with a message indicating that the output is truncated. Otherwise, the entire JSON text will be displayed. An example of JSON being displayed in this debugger visualizer is shown in Figure 4. •

Fig. 3: The magnifying glass symbol reveals the existence of an external debugger visualizer

Fig. 4: JSON text being displayed in the JSON debugger visualizer

Cary Jensen
Cary Jensen is the bestselling author of more than 20 books on software development, including Delphi in Depth: ClientDataSets, and winner of the 2002 and 2003 Delphi Informant Reader's Choice Award for Best Training. A frequent speaker at conferences, workshops, and seminars throughout much of the world, he is widely regarded for his self-effacing humor and practical approaches to complex issues. Cary's company Web site is at: http://www.JensenDataSystems.com. Cary has a Ph.D. from Rice University in Human Factors Psychology, specializing in human-computer interaction.
TIP: Windows keys for Windows 8
• Windows key: takes you to the Start screen
• Windows key + D: desktop mode
• Windows key + Q: query mode, i.e. search
• Windows key + I: settings; depending on Desktop or Start you see the Control Panel
• Windows key + C: brings up the Charms, the five icons on the right-hand side
• Windows key + Z: if a Modern UI app has an app bar, this is how you bring it up
GENERAL
Michiel van Otegem
My Heart Sings for Visual Studio 2012
In the previous edition I wrote about security with OAuth or OpenID and how easy that is in the new version of ASP.NET WebPages. For MVC I still described the need to pull DotNetOpenAuth from NuGet and write some code yourself. In the final release of Visual Studio 2012 it is all even easier! As a fervent advocate of identity federation I am, of course, very happy about that.

It was a small surprise when I discovered that the templates in the final release of Visual Studio 2012 had been changed. Where with the Release Candidate I still had to jump through some hoops to log in to an MVC application via Google, in the final version this had suddenly become much easier. When you create a new ASP.NET WebForms or ASP.NET MVC project in Visual Studio 2012 you can choose from several templates. By default the Internet Application template is selected, and for both WebForms and MVC this template offers out-of-the-box support for logging in with Facebook, Google, LinkedIn, Twitter, Windows LiveID and Yahoo. Each of these so-called identity providers can be activated with just a single line of code. The "magic" here sits in the Microsoft.Web.WebPages.OAuth.OAuthWebSecurity class, which is in fact a wrapper around DotNetOpenAuth that makes using it even simpler than what I showed in my previous column. The OAuthWebSecurity class contains several static methods starting with "Register" for registering an identity provider. In AuthConfig.cs in the App_Start folder, the lines for most providers are already present as comments, so all you have to do is enable them. Besides the predefined registration methods for the popular providers, OAuthWebSecurity also contains generic RegisterClient methods for OpenID and OAuth, with which you can register identity providers that are not predefined. This is more or less equivalent to what I discussed last time, although the mechanics differ, because the template is advanced enough to automatically show identity providers that you register in AuthConfig.cs to users as a login method, as you can see in Figure 1.

Fig. 1: Login screen with various login services

The page in Figure 1 is exactly as it comes in the template; nothing has been changed. The only thing I did was add a custom client in AuthConfig.cs with the display name ASPNL.com, and it is shown automatically when you start the application. Building an OAuth or OpenID client for OAuthWebSecurity yourself goes a bit too far for this column. For the majority of web applications it isn't necessary anyway; the six predefined providers will take you quite far. Here I limit myself to hooking up Twitter and Google. The first is more or less the same for most providers. Google is even simpler, because it works without a specific application ID and key; enabling the relevant line in AuthConfig.cs is enough. Just do it, then start the application and go to the login screen.
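As an illustration of the registration calls described above, a sketch of what AuthConfig.cs could look like with Google enabled and a custom provider added follows; the provider name and OpenID endpoint URL for ASPNL.com are invented here, and the generic RegisterClient overload is the one the column refers to:

using DotNetOpenAuth.AspNet.Clients;
using Microsoft.Web.WebPages.OAuth;

public static class AuthConfig
{
    public static void RegisterAuth()
    {
        // Google needs no application ID or key; enabling this line is enough.
        OAuthWebSecurity.RegisterGoogleClient();

        // A non-predefined provider via the generic RegisterClient method.
        // The display name is what appears automatically on the login page.
        OAuthWebSecurity.RegisterClient(
            new OpenIdClient("aspnl", "https://www.aspnl.com/openid"),
            displayName: "ASPNL.com",
            extraData: null);
    }
}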
Logging in via Twitter
To let users log in via Twitter, you first have to create an application in Twitter. You can do this on the developer site (https://dev.twitter.com). Click Create an app and log in with your Twitter account. You then have to provide a name and a description for your application, plus the URL of its homepage. Specifying a Callback URL is pointless, because ASP.NET ignores it. The annoying thing about the URL is that it cannot be localhost; you need a valid URL. You could arrange this by fiddling around in your hosts file, but there is a more elegant solution. A few clever guys have registered the domain localtest.me and pointed its DNS, for all subdomains, at 127.0.0.1: your local machine. You can read more about this at http://readme.localtest.me. It means, for example, that sdndemo.localtest.me points to your local machine, which is excellent to enter at Twitter for test purposes. Twitter also does not accept port numbers, so you do have to host your application in IIS and register it with a host header, or set it up as the Default Site. Once you have entered everything, accepted the terms and clicked Create Twitter application, you are shown the management page of the Twitter application, where you can also make changes later on. Copy the Consumer key and Consumer secret from the Details page and use them in RegisterTwitterClient, as in Listing 1.

public static void RegisterAuth()
{
    OAuthWebSecurity.RegisterTwitterClient(
        consumerKey: "fIPRhs89Nshhaz998YQ",
        consumerSecret: "VH2ajS89dfg547688...");
}

Listing 1: Registering your Twitter app as an identity provider.

The Consumer Key and the Consumer Secret establish a trust relationship between Twitter and your web application. Twitter uses them to encrypt data so that only your application knows how to decrypt it. That way Twitter knows for certain that the data goes to your application, and you know for certain that the data comes from Twitter. Sooo easy! Now that it is this easy to hook up providers such as Facebook and Twitter as external providers, while still offering users the option of creating a "local" user name and password, you really have no excuse left not to do it. Your users will love it (and so will I)! •
SDN > Update

Microsoft Surface
On 25 October the launch of Windows 8 could be followed live via an internet webcast. Steven Sinofsky and Steve Ballmer did this launch from New York. Among other things it was announced that the upgrade would cost slightly more than 30 euros, and various new Windows 8 hardware was shown. If you have already looked at Windows 8, you know that the Windows Store is where Microsoft Design Style apps can be downloaded. The Store is already filling up nicely: you will find many well-known programs (Skype, OneNote, Lync), games (Angry Birds, Wordament) and tools (WebRadio, Buienradar). 25 October was also the launch of Windows RT for ARM computers. Microsoft fanatics immediately think of the Microsoft Surface. Unfortunately it is not yet for sale in the Netherlands, but if, like me, you couldn't wait, you can of course order one in one of the neighboring countries. Have a look at http://www.wp7.nl/24403/microsoft-surfacert-kopen for the various options and their pros and cons. I ordered one myself in Germany, which comes with a QWERTZ touch cover. If you can touch-type, that isn't so bad ;-) Such a touch cover takes some getting used to for typing, but in the end it works quite well. If you want more feedback from your keys, you can opt for the Type cover; it is slightly thicker, but has keys like those of some laptops.

Windows Phone 8
During a special Windows Phone 8 event on 29 October in San Francisco, Joe Belfiore, Steve Ballmer and Jessica Alba announced the availability of the Windows Phone 8 SDK and of Windows Phone 8 itself. This is an enormous milestone for the Windows Phone platform; the SDK and the actual feature set had long been awaited. As Microsoft likes to put it, the phone is personal again. The improvements and novelties include the interface: the tiles can now be set to three different sizes, to your own taste. The picture on the start screen can change dynamically if you link it to your Facebook account. Joe's children demonstrated the Kids Corner, which lets you hand the phone to your children with peace of mind, without them accidentally sending a wrong text message or making a phone call. OS updates will in future come directly from Microsoft, and not via your provider as with Windows Phone 7. The browser has been upgraded to Internet Explorer 10. There is a Microsoft Wallet, where you can store your credit card details and the like. On the hardware side the requirements have been stretched: the screen resolution is higher, better processors are supported (quad core among others), memory can be extended with an SD card, and NFC makes its entrance. The latter is handy for easily exchanging data with other NFC users. The HTC 8X is now for sale at the PDA shop; other devices and brands, such as the Nokia 920, will follow at a later moment.

Wedge Keyboard and Mouse
If the touch cover or type cover of the Microsoft Surface is not to your liking after all, I can recommend the Wedge Keyboard and Wedge Mouse. These two Bluetooth devices work perfectly with the Surface, but can of course also be used with other laptops and computers. The keyboard is fitted with special Windows 8 keys. From my own experience I can report that it types very pleasantly. The mouse takes some getting used to: you hold it between thumb and ring finger.
CLOUD
Marcel Meijer
Windows Azure Portal Updates
It is as it should be in a cloud environment: with regular updates, new features and improvements come to the Windows Azure portal. Since today there are again new updates on the Windows Azure portal. A small summary.

Updates
1) On the Windows Azure portal you can edit the settings from the ServiceConfiguration.cscfg file. In the beginning these fields were just a bit too small, and that has now been fixed.
2) On the Storage pages you could not do much. There are of course various third-party tools, but creating a container is something you should be able to do right from the portal. With this refresh that has been added, which saves a switch to another environment. Even better, you can also add or change things from your mobile phone. Simply deleting a blob is now also possible from the portal. Hopefully options will follow to manage Windows Azure tables and Windows Azure queues from the portal as well.
3) There was already a menu item for managing the settings. It has now been extended with the creation of co-administrators. Until now we had to go to the Silverlight version of the portal for that; that is no longer necessary.

Additions
1) Service Bus: managing the Service Bus has come to the new portal! This is a great addition. Not only can you create a Service Bus subscription, but Queues, Topics and Relays can also be managed on the portal. Very nice.
2) Import/Export Databases: via the Silverlight portal you could already export your Windows Azure SQL Databases to your storage and import them from your storage. That is now also possible from the HTML portal. •
DESKTOP
Clemens Reijnen
Getting testing done in the sprint
It is always challenging to create a piece of a software system that fulfills a customer need, ready for use. Especially when it should be realized in an iteration of just three weeks, from idea to a fully functional and tested piece of the application, conforming to the 'Definition of Done', as it is called in the Scrum approach. Agile approaches embrace short iteration cycles in which releasable pieces of the software system are created. Releasable also means tested: unit tested, system tested, functionally tested, acceptance tested, and often also performance and load tested. One goal of making the item ready for use is that the team wants feedback: feedback on how they are doing and on whether it is what the customer really needed. Many teams find this a troublesome challenge, aren't successful at it, and deliver half-done products or make a workaround for the agile approach (in Scrum often called 'Scrum but'). Many agile workarounds defeat the very goal for which a team adopted an agile approach. For example, a standalone test team or a separate test iteration to get the testing done will result in less agility and fewer feedback loops. This article gives five tips on how a clear practice with the support of tools (this article uses the Microsoft Application Lifecycle Management tools as an example, but the tips are valid for any other ALM tool suite) can help teams be more successful in delivering done products when using an agile approach. Many of the tips will actually also be helpful for other methodologies and project approaches. Many readers will possibly think: "tools, wasn't that evil in agile? People and interactions versus tools and processes". That's half correct: tools aren't evil, and yes, interactions are very important and solve miscommunication far better than tools and processes ever can. But tools can help; tools and practices can support a way of working. Application Lifecycle Management tool suites, integrated tools with a central repository for all involved roles, support collaboration between roles: collaboration between the artifacts these roles create and teamwork between the work these roles execute. As a recent Gartner report writes: "Driven by cloud and agile technologies, the ALM market is evolving and expanding." See Gartner Report [1]: http://www.gartner.com/technology/reprints.do?id=1-1ASCXON&ct=120606&st=sb

Tip 1: Get a team
Actually not a tip, it is a must. This is kind of obvious, but not common, and it is the hardest thing to accomplish. Get a team, get testing knowledge in your team. When you don't have it, you will fail. Teams and companies have failed to reach their agile software development goals just because it was impossible to get the different disciplines together in one team. For example, the code implementation is done in an agile way, with
scrum boards and daily stand-ups together with the customer, because the customer wanted to be more flexible in what is needed in the system. Testing is done in a separate iteration and cadence, because this role was the responsibility of a different department. Bugs were found in functionality realized sprints ago, and testers needed more detailed requirements descriptions because they didn't understand the backlog items, pushing the customer into the corner to be more descriptive and fixed until testing was done. The customer loses all the flexibility he needed and gets frustrated. Just a simple example of how it can go wrong when you don't have a team, and there are thousands more. It isn't easy to accomplish a collaborative environment where all roles work together seamlessly. Testers and developers are different; a nice quote from this 'test' blog [2]:

In the D-world, the world of the Developers, we think Generalist Testers are pencil-pushing, nit-picky quality geeks. Mostly they are beside the point and are easily replaced. They seem to like making much noise about little defects, as if we made those errors deliberately... In the T-world we don't hate the Developers for their perceptions. We are disappointed about the poor quality of the software. Bad assumptions on the part of Developers are more to blame for the problems than are software weaknesses. We never (or seldom) get software that will work right the first time. No, in the T-world we think that developers forget for whom they are building software; it looks like they are building for themselves...

To combine these two worlds in one team, you definitely need to come up with a collaborative culture. The three most important concerns are:
• Trust. A topic closely associated with trust, when it refers to people, is identity.
• Collaborative culture. A collaborative culture consists of many things, including: collaborative leadership; shared goals; a shared model of the truth; and rules or norms.
• Reward. A "reward" for successful collaboration is most often of a non-financial nature.

Show me the value seems to be the magic word. Test adds knowledge, knowledge during the grooming of the backlog: helping the product owner define proper acceptance criteria. Testers can help find improperly written backlog items, for example by finding inconsistencies in the flow of a business case. A great test activity in the TMap testing approach can help here: assessing the test base. (TMap is a test management approach which structures the testing effort by providing different phases and tasks; see TMap.net for more details.) Simply put: find bugs in the requirements. In both ways the test role helps the product owner and the team to focus on value.

Tools can help: the Visual Studio 2010 TMap Testing Process Template gives test activities a more important place, helping the tester to get on board. Visual Studio Process Templates support a way of working. A template contains several work item types with a flow. For example, a bug work item type can go from the state 'new' to 'assigned' to 'resolved' and 'verified'. Such a work item can hold a lot of information, supporting the work that needs to be done to bring the work item to the next state. A process template is easy to customize: work item type fields, flow, validation and rights can be edited, and creating a new type is also supported. For example, the TMap Testing Process Template has an additional type "Test Base Finding", helping with the management of problems found in the test base (backlog), and a 'Testing' tab with test activities, next to the implementation tab.

Fig. 1: Test Activities in Team Foundation Server

Still two different worlds in this way, but it gives a good visual reward of being connected. Probably many teams won't need an additional visualization of the testing effort and can use the Scrum process template in combination with their testing methodology; this will help them to get started. Interesting is the manual test tool Microsoft Test Manager (MTM) in Visual Studio. It helps teams to get more connected; it shows the pain points where the collaboration isn't seamless. So adopting MTM can be a good start for agile teams to get testing aboard. But be aware that the interactions are more important than the tools. Tools won't fix bad collaboration, mismatching identities or lack of trust, and won't give any reward.

Tip 2: Write logical acceptance tests
In the previous tip, "get a team", the benefit of having testing knowledge on board during requirements gathering was already explained. Two practices were mentioned: assessing the test base and helping with acceptance criteria. This tip is close to the second practice: when the team is able to capture acceptance criteria in logical test cases, it will benefit from it. During the release planning meeting, capturing acceptance criteria and immediately adding them as logical test cases linked to the product backlog item helps the team understand and clarify the discussion and, the even more important benefit of this tip, it helps testers be involved and be important at the early stages of the software cycle. With product backlog items you could use, besides the user story style for the description (or title),

-- As a [role] I want [feature] So that [benefit] --

the same kind of pattern for acceptance criteria:

-- Given [context] And [some more context] When [event] Then [outcome] And [another outcome] --

Acceptance criteria are written in a scenario style. SpecFlow (see the SpecFlow website [3]), a Behavior Driven Development tool, also uses this way of describing scenarios, and from there it binds the implementation code to the business specifications. Tools can help to immediately create and link test cases to a backlog item, having them present for further clarification and ready to be physically specified with test steps. Visual Studio process templates support this kind of scenario: a Product Backlog Item has the fields 'description' and 'acceptance criteria' (see image).

Fig. 2: Product backlog item form in TFS

But it can also contain linked test cases. Create them from the 'Test Cases' tab and give them a meaningful title.

Fig. 3: Linking logical test cases with product backlog items

You can re-use the logical test cases in Microsoft Test Manager by creating a sprint test plan and adding the backlog item to the test plan; the logical test cases will appear in your test plan, ready for further specification. Once, when I tried to implement this practice in a project, the testers didn't agree. They were afraid the developers would only implement the functionality that was written in the logical test cases; knowing beforehand what was going to be tested seemed a bad idea to them. For sure, I had to work on Tip 1 first before the team could move forward.
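The article names SpecFlow but shows no binding code. Purely as an illustration of the Given/When/Then style above, a step binding in .NET code could look like this; the scenario wording and the LotterySite helper are invented for the lottery-style example, with a stub included so the sketch is self-contained:

using TechTalk.SpecFlow;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[Binding]
public class BuyTicketSteps
{
    private LotterySite site;
    private string ticketNumber;

    [Given(@"the lottery start page")]
    public void GivenTheLotteryStartPage()
    {
        site = LotterySite.Open("http://localhost/lottery");
    }

    [When(@"a visitor buys a ticket")]
    public void WhenAVisitorBuysATicket()
    {
        ticketNumber = site.BuyTicket();
    }

    [Then(@"a ticket number is shown")]
    public void ThenATicketNumberIsShown()
    {
        Assert.IsFalse(string.IsNullOrEmpty(ticketNumber));
    }
}

// Stand-in test driver; a real project would drive the UI or a service here.
public class LotterySite
{
    public static LotterySite Open(string url) { return new LotterySite(); }
    public string BuyTicket() { return "0042"; }
}

Written this way, the acceptance criterion on the backlog item and the executable specification stay literally the same text.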
Tip 3: Use a risk and business driven test approach
When there is no risk, there is no reason to test. So when there isn't any business risk, there aren't any tests, and it is easy to fit testing in a sprint. More realistically, a good risk analysis of your product backlog items before you start writing thousands of test cases is a healthy practice. In Scrum, too, risk is an important attribute:

"The release plan establishes the goal of the release, the highest priority Product Backlog, the major risks, and the overall features and functionality that the release will contain. Products are built iteratively using Scrum, wherein each Sprint creates an increment of the product, starting with the most valuable and riskiest. Product Backlog items have the attributes of a description, priority, and estimate. Priority is driven by risk, value, and necessity. There are many techniques for assessing these attributes." (from the Scrum Guide)
Within the TMap test approach, product risk analysis is an important technique. Determining product risks is part of the proposed activities in the Master Test Plan of TMap ('Analyzing the product risks'). It not only supports the Product Owner in making the right decisions, it also gives the team an advantage in a later stage: the risk classification is invaluable when defining the right test case design techniques for a Product Backlog Item.

"The focus in product risk analysis is on the product risks, i.e. what is the risk to the organization if the product does not have the expected quality?" (www.TMap.net)

Having a full product risk analysis for every Product Backlog Item during the release planning meeting is slightly overdone, but the major risks should be found. Determining product risks at this stage also provides input for the Definition of Done list. Within the Visual Studio Scrum 1.0 process template, product backlog items are written down in the work item type 'Product Backlog Item'. This work item type hasn't got a specific field for risk classifications, but adding a risk field is easily done. Most important for fitting testing in a sprint: know the risks, use test design techniques to cover them, and only write useful test cases.
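To make that concrete, here is a minimal sketch (our own example, not from TMap or this article) of how a chance-times-damage classification could drive the test effort per backlog item. The Level scale and the thresholds are illustrative assumptions.

using System;

enum Level { Low = 1, Medium = 2, High = 3 }

class BacklogItemRisk
{
    public string Title;
    public Level ChanceOfFailure; // e.g. new, complex or frequently changed code
    public Level Damage;          // business impact when it fails in production

    // Higher scores justify heavier test design techniques and more test cases.
    public int RiskScore()
    {
        return (int)ChanceOfFailure * (int)Damage;
    }

    public string RiskClass()
    {
        int score = RiskScore();
        if (score >= 6) return "A: test thoroughly";
        if (score >= 3) return "B: test";
        return "C: cover implicitly in other tests";
    }
}

class RiskDemo
{
    static void Main()
    {
        var item = new BacklogItemRisk
        {
            Title = "Cancel order",
            ChanceOfFailure = Level.High,
            Damage = Level.Medium
        };
        Console.WriteLine("{0}: {1}", item.Title, item.RiskClass()); // A: test thoroughly
    }
}

A score like this is exactly the kind of value you could store in the custom risk field mentioned above, so the classification travels with the backlog item.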
Tip 4: Regression Test Sets
In the same context as tip 3 you can think of regression test sets. Some teams rerun every test every sprint; this is time consuming and isn't worth the effort. A clear understanding of which tests to execute during regression testing raises the return on investment of the testing effort and leaves more time to specify and execute test cases for the functionality implemented in the current sprint.

Collecting a good regression set is important. There are a lot of approaches for assembling such a regression set, most of them based on risk classifications and business value (see the previous tip). The principle is that for each test case additional classification data is recorded, so that the test cases for the regression test are 'classified'. Using these classifications, cross sections of the subsets of test cases can form the total regression test that is selected. From TMap [4]: "A good regression test is invaluable."

Automating this regression set is almost a must (see the next tip: test automation). Making a good selection of which test cases to include is not a trivial task. With Excel you can do some querying for the proper test cases, but this gets harder when they are spread over different documents. Having good query tools, so you can easily make (and change) the selection of test cases that are part of the regression run, makes testing more efficient. A team I supported had more than 15,000 test cases distributed over about 25 feature test plans and 10 Scrum teams. For the execution of the regression set, a query needed to be run over all test cases to make a meaningful selection. Test cases in Team Foundation Server are stored as work item types in the central database, which brings powerful query capabilities: you can write any query you want, save it, and use it for your regression test selection. The team I supported used query-based test suites to save the selections.
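For teams that want to script such selections outside the tooling, something along these lines is possible with the TFS client object model; the server URL, project name and priority threshold below are illustrative assumptions, not values from this article.

using System;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.WorkItemTracking.Client;

class RegressionQuery
{
    static void Main()
    {
        // Hypothetical collection URL.
        var tfs = new TfsTeamProjectCollection(
            new Uri("http://tfsserver:8080/tfs/DefaultCollection"));
        var store = tfs.GetService<WorkItemStore>();

        // Select high-priority test cases for the regression run.
        const string wiql =
            "SELECT [System.Id], [System.Title] " +
            "FROM WorkItems " +
            "WHERE [System.TeamProject] = 'MyProject' " +      // hypothetical project
            "AND [System.WorkItemType] = 'Test Case' " +
            "AND [Microsoft.VSTS.Common.Priority] <= 2";

        foreach (WorkItem testCase in store.Query(wiql))
            Console.WriteLine("{0}: {1}", testCase.Id, testCase.Title);
    }
}

The same WIQL can back a query-based test suite in Microsoft Test Manager, so the selection stays live as test cases are added or reclassified.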
Fig. 4: Microsoft Test Manager query-based test suite on priority

Microsoft Test Manager has an interesting capability to control the amount of regression testing that needs to be done during the sprint: a feature called Test Impact gives information about test cases that are impacted by code changes (see the MSDN documentation [5]).

Tip 5: Test Automation
All validation activities (tests) cost time and money, so every activity to test a system should be executed as efficiently as possible (see the previous tips). Adding automation to the execution of these validations saves execution time, which saves money. But the creation, and especially the maintenance, of test automation also costs time and money. So the hard question is: what and how should we automate for our system validations, and where is the break-even point of test automation in the project?
DESKTOP
The ROI of test automation is a challenge. We have to think about how long the test automation stays relevant in our project (for example, not all functional tests are executed every sprint: only a subset, only the most important ones; see the post 'only meaningful tests') and how many times the validation is executed (how many times over time, and also on how many different environments). This gives us an indication of how much effort we should put into our test automation.
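As a rough illustration of this break-even reasoning, here is a small sketch with made-up numbers (our own example, not from this article): automation pays off once the accumulated manual execution time saved exceeds the creation plus maintenance cost.

using System;

class AutomationRoi
{
    static void Main()
    {
        double creationCost = 8.0;         // hours to build the automation
        double maintenancePerSprint = 1.0; // hours per sprint to keep it running
        double manualRunCost = 1.0;        // hours to execute the test by hand
        int runsPerSprint = 3;             // e.g. three environments per sprint

        double spent = creationCost, saved = 0.0;
        for (int sprint = 1; sprint <= 8; sprint++)
        {
            spent += maintenancePerSprint;
            saved += runsPerSprint * manualRunCost;
            Console.WriteLine("Sprint {0}: spent {1,4:0.0}h, saved {2,4:0.0}h{3}",
                sprint, spent, saved, saved >= spent ? "  <-- break-even" : "");
        }
    }
}

With these numbers the automation breaks even in sprint 4; halve the number of runs or double the maintenance and it never does, which is exactly why the questions above matter.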
Basically there are three test automation levels:
1. No automation
2. Test case record and playback
3. Test scripts

Visual Studio adds two levels to this:
1. No automation
2. Shared steps with record and playback (action recording)
3. Test case record and playback (action recording)
4. Test scripts (generated Coded UI)
5. Test scripts (manually created Coded UI)

Probably any other test automation tool will add its own levels; let's focus on Visual Studio.

Fig. 5: Playback shared steps in Microsoft Test Runner

All these automation levels have an investment and a maintainability level. The better you can maintain a test case, the longer you can use it for your ever-evolving software system. That is the connection between 'how long' and 'how well maintainable'; another connection is the effort it takes to create the automation. The resulting benefit is that you can execute your script over and over again. The ideal situation: a test script with a very small creation investment, for a test which needs to be executed the whole life of the application and which doesn't change over time. No investment, no maintainability issues, a maximum number of executions. Result: maximum ROI. Too bad we're not living in the ideal world, so some trade-offs need to be made.

1. No automation. No need for maintainable test scripts, no automation investment. I have customers who use Microsoft Test Manager for test case management only, and they are happy with it. They maintain thousands of test cases and their execution, gathering information about the test coverage of the implemented functionality. In most situations this is an ideal starting point for adopting Microsoft Test Manager and starting to look at test case automation: as a test organization you get used to the benefits of integrated ALM tools which support all kinds of ALM scenarios.

2. Shared steps with action recording | record and playback of parts. Collecting an action recording takes some effort. You have to think upfront about what you want to do, and often you have to execute the test case several times to get a nice and clean action recording. So there is some investment to create an action recording which you can reuse over and over again. In Microsoft Test Manager you can't maintain an action recording: when the application under test changes, or when the test case changes, you have to record every step again. A fragile solution for automation. Using shared steps (reusable test steps) with their own action recording solves this a bit. Find the test steps which appear in every test case, make a shared step of them and add an action recording to it. Optimize this action recording and reuse the shared step in every test case. The good thing: when a shared step changes, you only have to record that one again. This definitely improves the ROI. Now you can fast-forward all the boring steps and focus on the real test. Creating multiple shared steps with action recordings and composing a test case from them is also a good scenario. After the zero-investment level, this is a good next step: you get used to the behavior of action recordings and have the benefit of reusing them throughout the project. Action recordings of shared steps keep their value for the whole project; there is some effort to create and maintain them, but you will execute them for every test case. A good ROI.

3. Test cases with action recordings | full test case record and playback. The same activity as for the shared steps action recordings. But you will use the action recording less, and it is harder to maintain (more test steps), so the ROI is definitely much lower than in the shared steps situation. The scenario where you create the action recording and execute it often, for example on many different environments, will give benefits: Microsoft Test Manager action recordings can be recorded on one environment and played back on other environments. Another reason you might want to go with this scenario is that you want to reuse the action recording for test script generation; see the next step.

4. Generate a test script from an action recording. A really nice scenario for quickly creating test automation scripts; see this How To video [6]. The maintainability of the generated code is the hard part. There are some tools in place to react on UI changes, which make it easier: with Visual Studio 2012 the Coded UI Test Editor is available by default to edit search criteria, rename controls and create methods.
Fig. 6: Visual Studio UIMap editor
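Before looking at the trade-offs of hand-written scripts (point 5 below), here is a minimal impression of what a Coded UI test looks like when written by hand rather than generated from a recording. The application path, window title and control names are hypothetical; the API calls are from the Coded UI framework.

using Microsoft.VisualStudio.TestTools.UITesting;
using Microsoft.VisualStudio.TestTools.UITesting.WinControls;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[CodedUITest]
public class OrderScreenTests
{
    [TestMethod]
    public void CancelButtonShowsConfirmation()
    {
        // Launch the application under test (hypothetical path).
        var app = ApplicationUnderTest.Launch(@"C:\Apps\OrderClient.exe");

        // Locate the main window and the Cancel button by their properties,
        // instead of relying on a recorded UIMap.
        var mainWindow = new WinWindow(app);
        mainWindow.SearchProperties[WinWindow.PropertyNames.Name] = "Order Client";

        var cancelButton = new WinButton(mainWindow);
        cancelButton.SearchProperties[WinButton.PropertyNames.Name] = "Cancel";

        Mouse.Click(cancelButton);

        // Assert on the resulting dialog (hypothetical window title).
        var confirmDialog = new WinWindow(app);
        confirmDialog.SearchProperties[WinWindow.PropertyNames.Name] = "Confirm cancellation";
        Assert.IsTrue(confirmDialog.TryFind());
    }
}

Because all search criteria live in ordinary code, this style can be branched, merged and refactored like any other source file, which is the maintainability argument made below.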
Due to the UI Map XML, some common development practices, like branching, merging and versioning of the test script, are a challenging task. In conclusion: creating test scripts generated from action recordings is really fast, but they are hard to maintain. This, together with the recording of the action recording (number 2), has its influence on the investment.

5. Write your own test script (using the Coded UI framework). Write the test automation script yourself, following all the good coding principles of maintainability and reusability: separation of concerns, the KISS principle, don't repeat yourself, etcetera. The CodePlex project Code First API Library is a nice starting point. This automation scenario is the complete opposite of the generated test script (4): it is hard to create and will take some effort, but it is (if implemented well) very maintainable, and you can follow all the coding practices and versioning strategies.

So Microsoft Test Manager with Coded UI supports different test automation scenarios, from fast creation with maintainability pay-offs (2 and 3) to harder creation with better maintainability (5). It is good to think up front about test automation before randomly starting to use the tools. My rules of thumb are: use 3 and 4 in a sprint, and maybe in a release timeframe, but not longer, because maintainability will ruin the investment. Use 5 for system-lifetime tests: they run as long as the system code runs and should be treated with, and have, the same quality as that code; don't use it for tests you only run in a sprint, because the effort will be too big. And use 1 and 2 always, whenever you can: they support several ALM scenarios, and the shared steps action recording really is good record and playback support with a good ROI.

Closing
Five tips on how to make your testing effort more efficient. They work not only in agile projects but in all types of projects; only, in agile projects you will feel the pain earlier when things go less efficiently than planned. Not only will these tips bring your project benefit, I also encourage you to create and try your own tips and improve every sprint. On www.ClemensReijnen.nl I have written several more tips; feel free to add yours.

References
[1] Gartner Report: http://www.gartner.com/technology/reprints.do?id=11ASCXON&ct=120606&st=sb
[2] "I'm Living in 2 Worlds", Rob Kuijt: http://robkuijt.nl/index.php?entry=entry080329-160337
[3] SpecFlow: http://www.specflow.org
[4] TMap website, regression tests: http://www.tmap.net/en/news/good-regression-test-invaluable
[5] MSDN, Test Impact: http://msdn.microsoft.com/en-us/library/dd286589(v=vs.110).aspx
[6] YouTube, Coded UI Create Test Automation: http://www.youtube.com/watch?v=V1RiN3EDcw4

Clemens Reijnen
Clemens Reijnen is a management consultant at Sogeti who specializes in Application Lifecycle Management. He gives Visual Studio ALM trainings around the globe, created the certified Agile TMap testing process template for TFS 2010 and the TMap for TFS Windows Phone 7 app, and is a frequent speaker at conferences. He is vice president of the Dutch IASA chapter, and his experience encompasses deep knowledge of software development. You can catch Clemens on his technical blog at www.clemensreijnen.nl. Clemens also co-authored the book Collaboration in the Cloud: How Cross-Boundary Collaboration Is Transforming Business.

Dave Smits has been very active lately on the Windows Store forum on MSDN, where he came across this nice question. The person asking expected a particular outcome from the following piece of code, but got something else. The question was: why?

/// <summary>
/// Prints A1/A3/A4/A9
/// </summary>
private async void Button_Click_1(object sender, RoutedEventArgs e)
{
    Debug.WriteLine("A1");
    await ThreadPool.RunAsync((operation) =>
    {
        DoOperation("A").Wait();
    }, WorkItemPriority.Normal);
    Debug.WriteLine("A9");
}

/// <summary>
/// Prints B1/B3/B9/B4
/// </summary>
private async void Button_Click_2(object sender, RoutedEventArgs e)
{
    Debug.WriteLine("B1");
    await ThreadPool.RunAsync(async (operation) =>
        await DoOperation("B"), WorkItemPriority.Normal);
    Debug.WriteLine("B9");
}

private async Task DoOperation(string s)
{
    Debug.WriteLine("{0}3", s);
    await Task.Delay(1);
    Debug.WriteLine("{0}4", s);
}

Button_Click_1 neatly prints A1, A3, A4, A9, because the await on ThreadPool.RunAsync keeps waiting until the thread ends. In Button_Click_2 the work item already ends after printing B3, so B9 is printed before B4. This nicely demonstrates the difference between Wait() and the await keyword: Wait() blocks the thread, whereas await does not block but schedules the remaining work to run later, so in the case of Button_Click_2 the work item ends before B4 is called. The case above uses the old-fashioned thread pool; when it is replaced by tasks, something remarkable happens again:

private async void Button_Click_3(object sender, RoutedEventArgs e)
{
    Debug.WriteLine("T1");
    await Task.Run(async () => await DoOperation("T"));
    Debug.WriteLine("T9");
}

This example uses the same approach as Button_Click_2; only the thread pool has been replaced by a task. The outcome, however, is not the same as that of 2 but the same as that of 1: T1, T3, T4, T9. Tasks are not threads, and that becomes very clear here: the inner task is neatly awaited before the task from Task.Run ends and T9 is printed.

Link to the original post: http://social.msdn.microsoft.com/Forums/en-US/winappswithcsharp/thread/161bdd6c-74a5-485d-8293-63c18afd9dd2/#2c2f49c0-083c-4969-9520-20f0bae5172f
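Why Task.Run waits and ThreadPool.RunAsync does not comes down to the delegate signatures, a detail worth spelling out. The sketch below is our own annotated addition, not part of the original forum post; Button_Click_4 is a hypothetical handler reusing DoOperation from the listing above.

// ThreadPool.RunAsync takes a WorkItemHandler, which returns void. An async
// lambda passed to it therefore becomes 'async void', and the IAsyncAction
// completes as soon as the lambda returns at its first await (hence B9
// before B4). Task.Run, by contrast, has a Func<Task> overload that unwraps
// the inner task, so the outer await really waits for DoOperation to finish.
// A TaskCompletionSource can bridge the gap if you must stay on the WinRT
// thread pool:
private async void Button_Click_4(object sender, RoutedEventArgs e)
{
    Debug.WriteLine("C1");
    var tcs = new TaskCompletionSource<bool>();
    await ThreadPool.RunAsync(async (operation) =>
    {
        await DoOperation("C");  // the work item 'completes' at this await...
        tcs.SetResult(true);     // ...so signal real completion ourselves
    }, WorkItemPriority.Normal);
    await tcs.Task;              // resumes only after "C4" has been printed
    Debug.WriteLine("C9");       // output: C1, C3, C4, C9
}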
SDN > Update
Delphi News

Delphi XE3 News
Since the previous issue of SDN Magazine, RAD Studio XE3 has been released, containing among other things Delphi XE3, C++Builder XE3, Prism XE3 and a new tool called HTML5 Builder XE3.

RAD Studio Mobile
The good news is that it looks like the new RAD Studio Mobile will become available for free (or at low cost) to developers of RAD Studio XE3 Professional or higher with a subscription, or developers of Delphi Enterprise (or higher) with a subscription. For Delphi Professional a low price would apply; see the RAD Studio Mobile Roadmap at http://edn.embarcadero.com/article/42544 for details. One more reason to consider (or renew) a subscription next time... Besides iOS, RAD Studio Mobile will also support Android (later in 2013). A Delphi Mobile Development event is planned for Friday February 15, organized by Bob Swart in cooperation with the SDN. Keep an eye on the website (and your mailbox) for more details.

5% discount on Delphi XE3
In cooperation with reseller Bob Swart Training & Consultancy (eBob42), all SDN members can get a 5% discount on their Delphi Upgrade or New User licence via http://www.delphixe.nl/SDN or http://www.eBob42.com/SDN if they order a subscription with it (note: the discount applies only to the licence itself, not to the subscription, and only to new orders, not to orders already placed). The discount can run from a small €25 for a Delphi Professional Upgrade to a few hundred euros for a RAD Studio Architect New User (and even more if a 5 or 10 User Pack is purchased, of course). A subscription means that during its (renewable) term you immediately receive all new versions of the tool(s) concerned for free. It also entitles you to three so-called "incident support calls" with Embarcadero for problems that even your reseller cannot solve.

Missing in XE3
Delphi XE3 has been released, but compared to Delphi XE2 a few pieces are missing. Rave Reports has disappeared from the product, and developers are advised to switch to Fast Reports or QuickReports. For quite some time, no support had been given for or by the makers of Rave Reports, so this "change" is probably one that will prevent a lot of frustration in the future. In addition, Embarcadero Prism has been clipped somewhat by cutting out the DataSnap and dbExpress connectivity. There is no Prism XE3 Enterprise any more (XE 2.5 was the last Enterprise version of Prism), only a Prism Professional, just like the Oxygene versions that RemObjects makes and sells itself; except that those also include Oxygene for Java and Nougat (for iOS and OS X support). Embarcadero Prism is no longer sold separately, only as part of RAD Studio. Finally, there is no iOS support any more in Delphi XE3: FireMonkey for iOS has disappeared entirely (although XE3 owners still have XE2 at their disposal to keep working with FireMonkey for iOS). iOS support via FireMonkey will return early in 2013 in a new product that for now goes by the working name RAD Studio Mobile.

Embarcadero MVP
After Microsoft, Embarcadero has also started an MVP ("Most Valuable Professional") program. In the Netherlands, Bob Swart (of Bob Swart Training & Consultancy and chair of the SDN Delphi track) and Danny Wind (The Delphi Company) are the two MVPs. Other familiar names from the SDN circuit are Marco Cantù, Cary Jensen, Brian Long, Ray Konopka and Filip Lagrou; in total a good 60 Delphi "evangelists" worldwide. For more information, see: http://www.embarcadero.com/embarcadero-mvp-program