
Testing Best Practices

No feature is complete without testing. The feature may be entirely done and beautifully designed, but if it does not perform the expected action when used, no one will notice any of its good qualities. Once lost, faith in your development is hard to restore among your testers and users, so temporarily place yourself in their shoes and view your work from their perspective to identify bugs that would hurt how your feature is perceived.

Think about how your feature will be used in a production environment. So far, the small tests you have done along the way have been from the point of view of the developer. Now you need to perform several rounds of testing, putting yourself in the roles of developer, tester, and end user, and use the feature as it is intended to be used.

It might be difficult to get into this mindset, so it may help to get input from someone else. You could also try teaching someone else how to use your feature; even if you have no one to teach, rehearse how you would teach it. This can help you take a step back and approach the feature as a user would instead of as a developer.

By following the testing best practices below you can minimize unnecessary time spent in UAT testing and set your application up for success. Remember that testing sheds light on the bugs that exist in the feature. Make sure that each bug is recorded somewhere or fixed so no data is lost from the testing phase.


Error Handling
Clean Up
Use Cases (Unit Testing)
System Integration Testing
Supported Browsers
User Acceptance Testing (UAT)
Documentation

Error Handling

Whenever you are developing a feature, you build it for one use case: everything works as expected. After you finish development for that case, it is important to take a step back and review what you have built. You now need to ask “what could go wrong?” and add in some error handling for those cases.

Every feature has multiple use cases. Your error handling can start from the user perspective. If you asked them to fill in a text box and they didn’t, do you have enough validation to know when this mistake is made? Is there a case to catch this mistake and display an appropriate message to the user? Next you can think about the back end of your feature. If you write a database query and one result should return, you might have one path expecting that result. What if no results return or many results return?

If your feature includes processes, look at the process canvas. If it is a straight line with no splits for failure cases, you probably need to add some. Stand-alone processes should always end in an “Output” evaluate. Make sure as your process splits into multiple paths, each path ends with an “Output” evaluate and a “Terminate”. You should have both Success and Failure cases.

Here is a list of some common error cases. Consider whether they apply to your development and, if so, plan for them and add some error handling.

  • WebParts
      • Form Validation (use regular expressions in WebPart fields)
          • Strings entered in Quantity textboxes
          • Non-dates entered in date fields
          • Required fields
          • Exceeded character limits
      • Default Buttons defined for Text Boxes (which button is clicked if a user hits “Enter” while in a text box?)
      • More complex rules based on your specific form (might require button process validation)
  • Processes
      • Database Services
          • Delete
              • No records were deleted. If there should have been a record, is this enough of an error to display a message? Some deletes are precautions that delete records if they exist, and it is not a big deal if they don’t.
          • Update/Insert
              • Expected a record to update, but it didn’t exist. Should this be changed to an Upsert, or return an error message?
              • Expected no records, but the Insert throws a Primary Key Constraint exception. Should this be changed to an Upsert?
              • If you allow the user to name something that becomes part of a database record’s primary key, should there be validation to confirm the chosen name is not already taken?
      • File Services
          • Did you expect a file to be uploaded? What if the user did not provide one? You should check first.
          • Do you stream a file to the user’s local computer? Are you sure that file will always exist?
      • More complex rules based on your specific Process
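The form-validation cases above (strings in quantity boxes, non-dates in date fields, required fields, character limits) can be expressed as ordinary regular-expression checks. Here is a minimal sketch in Python; the field names, patterns, and the 50-character limit are illustrative assumptions, not part of EASYProcess:

```python
import re

# Hypothetical validation rules mirroring the error cases above.
RULES = {
    "quantity": re.compile(r"^\d+$"),                # digits only; rejects strings like "abc"
    "date":     re.compile(r"^\d{2}/\d{2}/\d{4}$"),  # MM/DD/YYYY shape (not a calendar check)
}
MAX_LEN = 50  # example character limit

def validate(field, value):
    """Return an error message, or None if the value passes."""
    if not value:
        return f"{field} is required."
    if len(value) > MAX_LEN:
        return f"{field} exceeds the {MAX_LEN}-character limit."
    rule = RULES.get(field)
    if rule and not rule.match(value):
        return f"{field} is not in the expected format."
    return None
```

Note that a pattern like `^\d{2}/\d{2}/\d{4}$` only checks the shape of the input; a real date field would also need a calendar-aware check (for example, actually parsing the date) to reject values like 13/45/2020.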


Clean Up

Before development is complete, remember to clean up your Processes and WebParts. Clean up is an important step: you may remember everything now, but in the future you may need to relearn how your feature works, and messy or unnamed canvases make that relearning harder.

Processes

  • Straight Lines/Lined Up Services: During development, you may not have kept straight lines in your process, and after working on it for some time the arrows and services may be a bit messy. Work from the start downward in a single column; when the column gets too long, wrap back up to the top. This way a process can grow quite large before you can no longer see it fully on the screen.

During Development

After Clean Up

  • Delete Floating Services: During development, you may have dragged services onto the process canvas that you didn’t end up using. Now is a good time to go through them and delete the ones you no longer need. If you do still want to keep services that are not connected, you can organize them off to the side so it is obvious that you intentionally kept them. If you want to provide some notes for a future developer, you could create an evaluate and write some notes about what you changed, why you changed it, how the logic works, and why you saved the floating services.

During Development

After Clean Up

  • Connect To Terminate: After the testing step, your processes may no longer be one straight path. You may now have binary decisions to handle your error cases. Remember to connect all of your process ends to a “Terminate” service. You can have multiple “Terminate” services on a process canvas.

During Development

After Clean Up

  • Name your Services: Ideally, you have named your services along the way to be meaningful to you and the logic you are performing. If you haven’t, it is best to do this now, while you still know how everything works, because any service that references another by name must have its reference updated when that name changes. As you work on features created by other developers, you will appreciate meaningful names; without them it is difficult to tell what is happening.

During Development

After Clean Up

  • Create a “Prevalues” Evaluate: It is a good idea to create an “Evaluate” at the beginning of your process to store the variables you will reference throughout the process. You can name it anything you like, but “Prevalues” is the suggestion. This is useful if you later change the name of a label you are pulling from the page: only the evaluate that actually pulls the value from the page needs to change, because the rest of the process references WorkData/Prevalues/LabelName.

Prevalues on Process Canvas

Prevalues Evaluate Configuration
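The benefit of a Prevalues step can be illustrated outside EASYProcess: gather every external value into one local structure at the top, and have everything downstream reference that structure, so a renamed page field means one change instead of many. A rough Python analogy (the page-field names below are made up for illustration):

```python
def run_process(page):
    # "Prevalues": gather all external references once, up front.
    prevalues = {
        "customer_name": page["txtCustomerName"],  # the only line to edit if the label is renamed
        "quantity": int(page["txtQuantity"]),
    }
    # Downstream logic references prevalues, never the page directly,
    # just as later services reference WorkData/Prevalues/LabelName.
    return f"{prevalues['customer_name']} ordered {prevalues['quantity']}"

# Hypothetical page values, e.g. pulled from WebPart fields.
result = run_process({"txtCustomerName": "Acme", "txtQuantity": "3"})
```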

  • Delete (Or rename) Test Processes: During development, you may have copied processes as backups or created test processes. This might be something you encounter when making complex changes that require a lot of user input to create a scenario you are testing. Rather than recreate that through the website each time, you might have created a process to set up that scenario and then start the logic you are testing. After your logic is incorporated into the site as intended, this test process will still exist, but will never be needed again. Future developers may not know if it can be deleted or not and will leave it just in case. This is especially true if you copied an existing process, because the name will be something meaningful and appear like it is used somewhere on the site. As part of clean up, remember to either delete these processes or rename them to “Do Not Use” or “Test [Developer Name]” so you can use this as your test process again in the future.

WebParts

  • Name Sections: Naming sections is mostly a courtesy to future developers. Sections are referenced by an Id from process canvases, so if a section is not set to display its header on the page, the name only shows on the WebPart canvas. Still, as the WebParts you work on become more complex, a canvas full of “New Section” descriptions makes it hard to navigate or to direct others to various areas of your WebPart.
  • Field Names: Field names are referenced from processes, so hopefully by this point you have chosen clean, meaningful names for your fields. If you copied fields during development, a number was appended to the existing name to keep names distinct; make sure these numbered copies are renamed or removed. Remember which fields you have referenced from within a process, because those references will need to change as well. This is why it is a good idea to create an “Evaluate” at the beginning of your processes to store the values from the page that you are going to use.

During Development

After Clean Up

  • Delete (Or rename) Test WebParts: During development, you may have copied WebParts as backups or created test WebParts. After all the WebParts that will be used in the feature are integrated into the site (referenced in WebPages/hyperlinks), find all the WebParts that are not used and either delete them or rename them to “Do Not Use” or “Test [Developer Name]” so you can use them as your test WebPart again in the future. If you do not do this, future developers may not know if it can be deleted or not and will leave it just in case. This is especially true if you copied an existing WebPart, because the name will be something meaningful and appear like it is used somewhere on the site.

Files

  • Delete Test Files: We haven’t done anything with files yet, but when you begin working with file saving, downloading, moving, etc., you will create many test files. These remain on the server you are working on until someone deletes them, so it is best to delete the files while you still remember which ones are not needed.
  • Delete Test Directories: We also haven’t created any directories yet, but just like files, when you start working with these you will create many test folders. In the future, other developers will be wary of deleting folders, because if a process requires one, it will stop working once the folder is deleted. Rather than risk this, developers will leave a folder in place even though it appears unused. After you have finished working on your feature, clean up any folders you created.


Use Cases (Unit Testing)

As part of error handling you tried to look at your feature and find other error cases. Now, look at your feature and think about all other use cases. This is part of Unit Testing. You want to find all the ways to use your feature and confirm that the feature itself works in every case. Some of these will be correct uses that don’t result in failure; others may be additional error cases you missed during the error handling step.

Create a table of all possible uses. This may help you better visualize all the different uses for your feature. For this simple training example, it might be easy to just think through all the cases and test them without creating a table, but it is good to get in the habit of doing this. As the features you build become more complex, this will help you think through all possible scenarios and even share your table with others afterwards to get feedback on more scenarios you did not think of.

As you test each case, mark it as either successful and move onto the next case, or troubleshoot the issue (using logs) to fix it. After you make any changes, you will have to retest all the previous cases you already marked as successful.

Example

In this example, there is a WebPart that displays addresses. The user can filter the addresses in the grid by Address Number or by Name. Once the grid displays the selection the user wants, the desired rows can be selected with checkboxes at the left, and the button at the bottom emails address info for the selected rows.

A sample table is below and accounts for some of the possible use cases.

WebPart B Use Case                                                                               Success?
-------------------------------------------------------------------------------------------------------
User loads the page for the first time
User filters the list section by Address Number
User filters the list section by Name
User filters the list section by Address Number and Name
User enters nothing in the filter textboxes and clicks the Search button
User checks no checkboxes and clicks the email button
User checks one checkbox and clicks the email button
User checks multiple checkboxes and clicks the email button
User uses the “Select All” checkbox in the header row to select all and clicks the email button
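For more complex features, the use-case table itself can be generated rather than hand-written, by enumerating the combinations of inputs. A sketch of that idea in Python follows; the input dimensions below mirror the example WebPart and are purely illustrative:

```python
from itertools import product

# Illustrative input dimensions for the example address WebPart.
address_filter = ["empty", "filled"]
name_filter    = ["empty", "filled"]
checkboxes     = ["none", "one", "many", "select all"]

# Every combination becomes a candidate row in the use-case table,
# with the Success? column left blank for the tester to fill in.
rows = [
    {"address filter": a, "name filter": n, "checkboxes": c, "success": ""}
    for a, n, c in product(address_filter, name_filter, checkboxes)
]
print(len(rows))  # 2 * 2 * 4 = 16 candidate cases to review
```

Not every generated combination will be meaningful, so prune the impossible ones, but enumerating combinations makes it much harder to overlook a case.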


System Integration Testing

Once Unit Testing is complete, you can move on to System Integration Testing. By this point you have confirmed that your feature works as an individual piece; now you want to test that it works as one piece of the whole application. Think about the areas where your feature integrates with other aspects of the site. Go through the list below, keeping a list of all the integration points as you do. All the steps you follow in DV you will follow again in QA, so you are also creating a QA testing script for yourself.

  1. Does your feature have buttons? Look in each button.
      a. If it calls a process you didn’t develop for this feature, make sure it passes the correct input in all of your use cases. Different users may have access to this button, and the input could change depending on the user; think about any other ways the input could change. The process may also depend on user session variables, which change from user to user. Ideally, any process you call has a “Prevalues” evaluate at the beginning showing all the variables it uses; if it does, open it to confirm your feature provides all the input it receives or needs in every use case.
      b. At the end of the button process, does it forward somewhere? Make sure each user type that can access your feature also has access to the page you are forwarding to.
  2. Does your feature have hyperlinks? Look at each hyperlink.
      a. Does it open a file? Try opening it. A file pointed to by a hyperlink needs to exist on the Web Server in the Website folder; if it opens when clicked, you know the file is there.
      b. Does it forward to another page, with or without query strings?
          • Make sure each user type that can access your feature also has access to the destination page.
          • If query strings are passed, make sure that in every use case the query strings that are referenced are actually being sent.
      c. Does it open a pop-up window?
          • Are your defined window height and width big enough for the WebPart that opens in that window? If other locations open the same WebPart in a window, make their window sizing consistent. The exception is when your feature opens the WebPart with a query string that makes it render at a different size than in other locations. For example, if an “Add” query string makes the WebPart appear very large, its window needs to be larger, while a location passing an “Edit” query string that renders it smaller can use a smaller window.
          • If query strings are passed, make sure that in every use case the query strings that are referenced are actually being sent.
  3. Do other pages link to your feature?
      a. Is there a hyperlink on a page that forwards to your feature? Test that it works.
      b. Is there a button that forwards to your feature? Test that as well.
  4. Does your feature share database records with other features?
      a. If records created elsewhere should be viewable on your page, make sure they are. If they shouldn’t be, make sure they aren’t.
      b. If your feature creates the records, make sure the reverse works.
  5. Do other features that apply to the whole application need to be integrated with your feature?
      a. A feature like “hiding the price site-wide when the admin flips a switch” would also need to hide the price on your feature. Do you need to hide or unhide fields or information depending on visibility settings related to another feature? There should be documentation on this somewhere; if there is not, inspect fields elsewhere on the site similar to yours and examine them for additional logic.
      b. A feature like “being able to type a customer item number anywhere on the site and have the site know what I mean” means any text box where you accept an item number should also accept a customer item number. There is probably a process which takes the user-entered value and outputs the actual item number if one exists. There should be documentation on this somewhere; if there is not, inspect textboxes elsewhere on the site similar to yours and examine them for additional logic.

Supported Browsers

If you haven’t already been developing and periodically testing in a variety of browsers, it is important during the testing phase to check that your feature does what it is supposed to in all supported browsers. Different browsers can handle CSS or JavaScript differently. Using typical EASYProcess tools should be safe across browsers, but when you branch out and HTML-override a WebPart or add your own styling, the result may appear differently than you expect, and you should be aware of it so you can make it consistent.

Try to test your code in the most up-to-date versions of all of our supported browsers:

  • Chrome
  • Firefox
  • Internet Explorer
  • Safari
  • Edge

User Acceptance Testing (UAT)

This is testing by an actual user, and it should not be performed by the developer who worked on the feature. As the person closest to the project, you may have a blind spot for a design flaw or an unexpected use case. Hopefully these are caught before this step, but they should definitely be caught in UAT, and having a designated tester helps us do this.

Promote to QA

Until this point your testing should have been in DV. Until your testing is complete, you are not sure if your feature is stable. You want to keep QA as stable as possible and close to the Production environment. When it has passed your testing, you are ready to promote to QA. Follow the Promotion Best Practices steps and make sure you keep track of all the things you did to promote from DV to QA. You will follow those steps again when you promote from QA to PD so you are creating a checklist for yourself to make sure the next promotion goes smoothly.

Test Again in QA

All the steps you did in DV should now be repeated in QA. In DV, your Unit Testing and System Integration Testing probably found bugs and prompted changes that forced you to restart your testing. By now, you should have a stable feature that can be tested more quickly than when you did this in DV.

Your QA testing makes sure you didn’t miss a step in your promotion and that everything was promoted correctly. If issues appear that didn’t happen in DV, you can ask “what is different about this environment than the last?” and use that to identify what was missed.

Identify and Prepare your Tester

Identify your UAT tester and think about what they need to know. If your feature is very intuitive, you may only need a short description of what your feature is, what it does, how it works, and how it should be used. This step might also reveal a need for documentation, and you can take this opportunity to write anything necessary before passing the feature off to your tester.

Issues Found

The tester should have a way to raise issues that are found in testing. During UAT, there should be lots of communication between the tester and the person who will address the issues. Make sure there is a way for the two to communicate and a designated place for all issues found.

Code Freeze

There should be a period of time that the UAT tester is still actively using the site, but cannot get anything to break and finds no issues. This is when it is ready for the production environment. The code freeze is an important step because it more closely emulates the actual users who will be performing the same actions over and over and still expect the same bug-free response.


Documentation

With the testing, you have had a chance to take a step back and look at your feature from the point of view of a user. Next, you will need to think about who will need documentation on your feature.

  1. Users: Is the feature intuitive enough that an end-user can figure it out or do you need to write a “How To” document explaining how to configure it?
  2. Developers: You cleaned up your work, but that may not be enough to decipher what your logic is doing. Now, you need to put yourself in the place of your future self and other future developers who do not know how your feature works. Are the cleaned up Processes and WebParts enough to help someone walk through your logic? If not, can you clean up your feature more by adding more helpful names to your services? If you feel future developers will still need help, you can add some documentation.

Below are some suggestions to add documentation to Processes and WebParts themselves in ways that will not affect the feature.

Processes

  • Backup Evaluate: If you want to keep a backup of services that are not used in the process flow but that you don’t want to delete, set them off to the side of your process canvas underneath an “Evaluate” service named “Backup”. The service name cannot contain spaces or special characters, but you can distinguish the backup by appending the date, using periods to separate the month, day, and year. Inside the backup, you can create another eval, also named “Backup”, and put any comments you like inside. A good idea is to put your name as the developer and list out the changes made, issues you encountered, or a walkthrough of how and why the logic works.

Backup Evaluate on Process Canvas

Backup Evaluate Example

  • README Evaluate: If you want your documentation to draw more attention, you could create an “Evaluate” service, name it “README”, and place it at the beginning of the process instead of off to the side.

README Evaluate on Process Canvas

README Evaluate Example

  • XSLT Comments: Within services, anywhere you can type text, you can place an XSLT comment. EASYProcess colors XSLT comments green to help you distinguish them from the text that EASYProcess will use; the comments themselves are ignored by EASYProcess.
  • Encase your comment like so: <!-- Comment Text -->

Prevalues Evaluate with XSLT Comments

WebParts

  • HTML Generic Control with HTML Comments: An HTML Generic control lets you type a large amount of text into a web control through the “InnerHTML” property. Be careful: this text will appear on the page unless you configure the control correctly. Set the HTML Generic control’s “Active” property to “False”; you can also set the “Active” property of the section it is in to “False”. Mark your comment as an HTML comment as well, so that even if the control were ever made active again, the comment still wouldn’t show on the page because it is commented out.
  • Encase your comment like so: <!-- Comment Text -->

README HTMLGenericControl on WebPart Canvas

README HTMLGenericControl Example


Powered by EASYProcess (© 2019 K-Rise Systems, Inc).