Hightower R. Java Tools for Extreme Programming: Mastering Open Source Tools Including... Wiley Publishing, 2002. 417 p.
ISBN: 0-471-20708
Download (direct link): javatoolsforextremeprog2002.pdf

Technical Limitations
HttpUnit provides much of the functionality of a Web browser from within a Java application. However, a number of things remain outside its purview. JavaScript is an obvious example. There is no way to verify that a JavaScript function will be called upon form submission, that a rollover will happen, and so on. No add-on to HttpUnit is planned to address this issue, although Russell Gold (the framework's primary author) says in the project FAQ, "If you feel ambitious enough to add JavaScript support yourself, I would be happy to accept submissions" (http://httpunit.sourceforge.net/doc/faq.html#javascript).
HttpUnit does not forgive bad HTML. This strictness can cause problems when you test pages that display correctly in major browsers but do not strictly adhere to the HTML specification. Frequently the problem involves <form> tags, which must be nested correctly within tables. Unfortunately, the only real fix is to correct the HTML, although calling HttpUnitOptions.setParserWarningsEnabled(true) will at least report the HTML problems encountered during parsing. See the WebForm class documentation in Chapter 16, "HttpUnit API Reference," for details on this problem.
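To see the kind of markup that trips the parser, consider the classic malformed pattern: a <form> opened directly between <table> and its first cell. Browsers tolerate it; the HTML specification (and HttpUnit's parser) does not. The following stdlib-only sketch is purely illustrative (the method name and the naive string scan are our own, not part of HttpUnit's API); a real check would use a proper HTML parser.

```java
public class FormNestingExample {
    // Naive scan (assumes lowercase tags, one table, one form): returns true
    // if a <form> opens after <table> but before any <td> -- the malformed
    // nesting that major browsers render but HttpUnit's parser rejects.
    static boolean hasFormBetweenTableAndCell(String html) {
        int table = html.indexOf("<table");
        int form = html.indexOf("<form");
        int td = html.indexOf("<td");
        return table >= 0 && form > table && (td < 0 || form < td);
    }

    public static void main(String[] args) {
        String bad  = "<table><form action=\"/q\"><tr><td><input></td></tr></form></table>";
        String good = "<table><tr><td><form action=\"/q\"><input></form></td></tr></table>";
        System.out.println(hasFormBetweenTableAndCell(bad));   // prints true
        System.out.println(hasFormBetweenTableAndCell(good));  // prints false
    }
}
```

Pages matching the first pattern must be rewritten into the second before HttpUnit's WebForm can locate the form.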
Spider Example
HttpUnit's Java foundations mean that it's possible to devise all sorts of interesting testing applications. The API merely offers objects for test developers to employ at their convenience. In fact, verifying page output from individual requests (as we did with the sales report page in our previous examples) is perhaps the lowest-order use of the framework. The code for verifying three simple pages was over 100 lines long. Writing a test like that could take several hours, especially as you slowly tweak it to account for wrinkles in the underlying site. To quote one developer I worked with, "You could check the output manually in three seconds." Clearly, writing code like that is not the best option. Instead, test developers should seek ways to automatically verify portions of Web applications without manually asserting every single page. The spider example illustrates this approach by attempting to solve a common problem: how to quickly verify that a new site deployment has no major errors.
Spider Development: First Iteration
Our criterion for basic verification of a deployment is that each user-accessible link displays a page without a server error (HTTP code 500, Internal Server Error). HttpUnit helpfully turns these errors into HttpInternalErrorExceptions, which will fail tests for us. So we have a starting point: if we can retrieve a response for every link on the site without an exception, the site must at least be up and running.
Is this a Valid Test?
Checking all the internal links is not a thorough test for most sites because it omits things like user interaction, customization, correct page sequence (private.jsp should redirect to the login page), and so on. However, imagine a dynamically generated catalog site containing more than 10,000 products. A link-checking spider could automatically verify that all the product pages at least display, something that even the most dedicated tester would be loath to do. Some added logic to verify the general structure of a product display page could provide testing coverage for more than 90 percent of the accessible pages on such a site.
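That "added logic" can be as modest as asserting that a few structural landmarks appear on every product page. The sketch below is hypothetical: the markers (an <h1> heading and a price cell) and the method name are our assumptions about what a catalog page might contain, not anything from the book or from HttpUnit; in a real test you would run such a check against the HTML of each WebResponse.

```java
public class ProductPageCheck {
    // Assumed structural invariants for a hypothetical catalog page:
    // every product page carries a heading and a price cell.
    static boolean looksLikeProductPage(String html) {
        return html.contains("<h1") && html.contains("class=\"price\"");
    }

    public static void main(String[] args) {
        String page = "<html><h1>Widget</h1><td class=\"price\">$9.99</td></html>";
        System.out.println(looksLikeProductPage(page));          // prints true
        System.out.println(looksLikeProductPage("<html></html>")); // prints false
    }
}
```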
Using HttpUnit's classes, coding such a spider becomes easy. All we need to do is start at the front page and try to go to every internal link on it. If we encounter a link we have checked before, we ignore it. For each page we successfully retrieve, we check all the links on it, and so on. Eventually, every link will have been checked and the program will terminate. If any page fails to display, HttpUnit will throw an exception and stop the test.
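Stripped of the HTTP machinery, the bookkeeping just described is a depth-first traversal over a visited set. The following stdlib-only sketch substitutes an in-memory map of page-to-links for the live site (the map and its URLs are invented for illustration); the real test, shown next, drives HttpUnit's WebConversation and WebLink objects instead.

```java
import java.util.*;

public class SpiderSketch {
    // In-memory stand-in for the site: page -> links found on that page.
    // (Assumption: a real spider would fetch each page with HttpUnit.)
    static Map<String, List<String>> site = Map.of(
            "/", List.of("/a", "/b"),
            "/a", List.of("/b", "/"),
            "/b", List.of("/a"));

    static Set<String> checked = new HashSet<>();

    static void checkAllLinks(String page) {
        for (String link : site.getOrDefault(page, List.of())) {
            if (checked.add(link)) {   // add() returns false for repeats
                checkAllLinks(link);   // recurse only into unseen links
            }
        }
    }

    public static void main(String[] args) {
        checked.add("/");
        checkAllLinks("/");
        System.out.println(checked.size()); // prints 3 -- every page reached once
    }
}
```

The key trick carries over directly to the real test: java.util.Set.add() returns false for duplicates, so the visited-set check and the insertion are a single call.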
We begin by initializing a WebConversation to handle all the requests and a java.util.HashSet to keep track of which links we have followed so far. Then we write a test method that gets the response for a site's homepage and checks all the links on it:
private WebConversation conversation;
private Set checkedLinks;
private String host = "www.sitetotest.com";

public void setUp(){
    conversation = new WebConversation();
    checkedLinks = new HashSet();
}

public void testEntireSite() throws Exception{
    WebResponse response = conversation.getResponse("http://" + host);
    checkAllLinks(response);
    System.out.println("Site check finished. Links checked: " +
        checkedLinks.size() + " : " + checkedLinks);
}

The checkAllLinks() method is also simple:

private void checkAllLinks(WebResponse response) throws Exception{
    if(!isHtml(response)){
        return;
    }
    WebLink[] links = response.getLinks();
    System.out.println(response.getTitle() + " -- links found = " + links.length);
    for(int i = 0; i < links.length; i++){
        boolean newLink = checkedLinks.add(links[i].getURLString());
        if(newLink){
            System.out.println("Total links checked so far: " + checkedLinks.size());
            checkLink(links[i]);