To be validated, a bug ticket must be as clear, concise, and complete as possible. To achieve this, you must fill in the fields described below according to a precise and rigorous nomenclature.
Here is a golden rule to describe a bug correctly: any information not present in the description of the test step must be included in the bug description to ensure that it is understood.
Title of the bug ticket. It must be concise and explicit. Bear in mind that someone who is not familiar with the project should be able to understand the issue simply by reading the title of the ticket. A particular nomenclature may sometimes be requested by the customer; in other cases, you can use this rule: [Page name/area] Concise description of the bug.
Examples:
- [My account] The profile image is squished vertically
- [My Orders] With all accounts, the numbers behind the decimal point are not displayed
- [My Orders] With the account [Data Set], the dates are displayed in the US format
- [Home page] The login button does not prompt any actions
With information received prior to the project, the PM lays out the sections of the product to be tested (usually the pages or functions of a site). In this case, the tester must indicate in which section his bug is located, by selecting it from the drop-down list of choices (only one possible choice for each bug).
Examples:
- Home page
- My orders
- Cart
- Checkout Process
When testing on different versions of the product, the PM provides the version numbers to be tested (application build or site version). The tester must indicate in which version(s) his bug is located by selecting them in the drop-down list of choices (several choices possible for a bug). The PM may use this "Version" field for other purposes; in this case, follow the instructions given by the PM.
Examples:
- Build v1.0.3
- Build v1.0.4
- Version 5.1
- Version 5.2
This is where the tester describes his bug. This part must contain a precise and accurate description of the defect found during the tests. If necessary, additional information for a good understanding of the defect should be added here (including the data sets used, so that the customer/developer has all the necessary information).
Examples:
- Absence of the separation bar between several products
- The help icon "No IMEI" touches the top frame
- When scrolling down the "Meals" page, you reach an area without content, all white, for several seconds before reaching the end of the page
- When you launch a search for a product that cannot be found in the "Product search" tab, you get infinite loading
- When filling in the "Change my billing address" form, the "Country of residence" field crashes the application when you click on it. Account used: email: abcd@gmail.com, password: abcd1234
- When the user arrives on the home page and wants to click on "options", nothing happens; the link is either dead or no longer current
- After filling in all the registration information and validating it, the page loads for a few moments, then the registration page is displayed again with an error message in the "Email confirmation" field
- If the user clicks on "View my last scanned items", he is blocked on the page indicating that he has not scanned an item yet. The user is therefore forced to close the application to be able to start browsing again
Describe here the normal behavior of the product expected by the developers.
For an exploratory test, the tester must rely on their experience to describe the expected behaviour.
For a guided test, the tester must reiterate the expected result of the stage concerned.
Examples The "Menu" popup is composed of the following elements: “Close” button on the top right "Rules of the game" button (in blue) "Terms and Regulations" button (in blue) “Sound” button "Exit the game" button (in red) The "My Account" window opens Each time you click on the left navigation arrow, the previous page is displayed with no apparent issues The confirmation page is displayed after the validation of the registration form. After launching a search for a product that cannot be found, the application indicates this to the user with the message "Product not found" The page indicating that the user has not scanned any items has a "back" button When completing the "Change my billing address" form, the "Country of residence" field displays a list of possible countries of residence The Pop-in window closes
The steps to reproduce indicate all the actions necessary to find or reproduce the defect.
For guided tests, be careful not to reuse the test description as written in the Test Suite without thinking it through. In some cases, the steps from the Test Suite are sufficient, but this description may sometimes differ from the actual steps needed to reach the page and reproduce the anomaly. The steps must be as precise and complete as possible, containing all necessary information, so that the person in charge of checking the defect can do so without knowing the test environment.
Examples:

1. Click on "Store".
2. Fill in the search bar field with the city "Marseille".
3. Select the "Marseille-La Valentine" store.
4. Click on "News".
5. Scroll down to the bottom of the page.

1. Launch the application and log in to your account.
2. Choose a favorite store.
3. Go to a PLP (e.g. "Televisions").
4. In the filter box, choose the display grid on two columns.
5. Check the display.

1. Be logged out.
2. Click on the register button.
3. Enter the requested information except the phone number.
4. Confirm by clicking on the "Register" button.

1. Login.
2. Click on "My Cart".
3. Click on the "+" button in order to increase the quantity.
We identify 5 main types of bugs: graphical, ergonomic, functional, performance/stability, and wording.
A graphical bug is a bug that alters the display of one or more elements of a tested page. This bug category therefore refers to the visual (format, neatness, layout of elements, etc.).
Examples:
- Blurred, pixelated or cut image
- Picture in the wrong place, offset
- Trimmed button
- The display of an element differs from / does not conform to the reference model
- Font or line spacing not suitable
- Text misaligned vertically or horizontally
- A graphic element protrudes/overflows from the element containing it
- A graphic element is distorted/stretched/squished
- A graphic element is too close/stuck to another
- A graphic element is blurred/pixelated
- One graphic element is overlapping another
- A non-functional graphic element is missing
- A word is cut off/displayed on two or more lines
- One graphic element is not aligned either vertically or horizontally with respect to the others
- A watermark is present on an image
- Inconsistent lateral margins
An ergonomic bug concerns an anomaly related to a bad implementation of the UI. It reports a problem related to a feature that is difficult to access. The element works but its layout or appearance is not optimal for the user.
This type of bug is rare, and is brought up especially when a suggestion is made.
Examples:
- The back button is not in the same place from one page to another
- The validation button is red, a color commonly understood as cancellation or stop; it should rather be green
- The sequence of pages is not logical
A functional bug concerns an anomaly involving a function not being performed or deviating from the expected result. This is a bug that alters the functioning of one or more elements.
The absence of a feature is also a functional bug.
Examples:
- A function giving no result or a different one than expected
- Unable to enter text in editable fields
- An unexpected error message appears
- Radio button inactive
- Broken or missing image
- Return button unresponsive
- Inactive carousel
- No email notification
- Scrolling the page is not possible
- Absence of sound
- Non-operational geolocation
- Absence of zoom in and out actions
- Virtual keyboard not closing automatically
- Displaying a loading wheel infinitely
- Being unable to install the application
- No data refreshing
- Bluetooth pairing not working
- Form auto-completion not working
- The shopping cart appears empty (although it contains items)
- Sharing on social networks does not work
- The data entered by the user is deleted after a page change
- Opening the default alphabetical keyboard instead of the numeric one
- Blank choice pop-in
- Redirection to the production version of the site
- Redirection to a 404 error
- Being able to create an account even when the necessary conditions are not met (incompatible date of birth...)
- PDF download impossible
- Text entry impossible
- Tutorial absent when starting the application
- Selection of filters impossible
- An anchor redirecting to the wrong position
- Missing link or button
A performance/stability bug causes measurable slowdowns (slowness at launch, poor management of loading times, etc.) or even a shutdown of the tested product.
Special cases for application crashes
The criticality of a crash must be as follows:
Examples:
- A site or application is very slow or crashes regularly
- A random crash of the application
- Delay between character entry and appearance in the field
- Page slow to react to scrolling/swiping
- Page taking more than X seconds to load
- Freeze of the application
- Jerky video (due to loading)
Wording bugs concern issues relating to the text contained within the tested product. They can affect typography, spelling, grammar, syntax, translation or even differing labels.
Examples:
- Double spaces
- Typographical errors (missing non-breaking spaces)
- Semantic error
- Wrong spelling, grammar or language
- Different fonts in the same sentence
- Double punctuation
- Text not conforming to the reference model
- Typing errors
On WAT we assign three levels of severity to bugs: Blocking, Major and Minor. A fourth value, "Suggestion", is available in the Severity field for feedback rather than an actual bug.
The bug prevents access to a feature or module; it stops the progress of the tests. This is the highest severity.
Examples:
- The link/button "X" is not present
- When you click on the "Validate" button, the page reloads
- Unable to add credit to my account
- The page does not load (infinite loading)
- Text entry not possible
- Scrolling the page is impossible
- During the validation, the shopping cart empties and the order is not confirmed
- Valid information is not accepted as such
- The application systematically crashes at point "X"
- The "X" box to be checked is missing
- Unable to install the application
- "X" page is not reachable
The bug is very inconvenient for the user, but it does not block any important functionality or the progress of the tests; it can be bypassed.
Examples:
- The pop-in does not contain any text
- Page that takes more than X seconds to load
- Broken or missing image
- The page is not responsive
- Filter selection not possible
- Absence of sound
- Information cannot be shared on social networks
- The "X" field is not indicated as mandatory
The bug represents only a small inconvenience for the user; it does not block any functionality and does not hinder the proper use of the product.
Examples:
- Typing errors
- A graphic element is blurred/pixelated
- Pictures in the wrong place, offset
- Different font styles in the same sentence
- Video not playing automatically
- An anchor redirecting to a wrong position
- When hovering over the button, it does not change color
- One graphic element is overlaid on top of another
- Text or block alignment problem
- Ability to enter numbers in a text field
- The information is not pre-filled
The "Suggestion" severity does not exist in testing proper, but for practical reasons it was decided to create it so that testers' feedback can be recorded through an anomaly ticket.
Examples:
- The report is sent without a message or subject. An error message indicating to the user that these fields are empty would be welcome.
- The "Play" button downloads another application. This button should be renamed: the term "Play" implies that you will play within the same application, and the user may be surprised by an unwanted download.
This field allows you to give additional information on the frequency of the fault occurring. The tester indicates if it is unique, random, present all the time or if he has not tested it by choosing the corresponding item from the drop-down list.
The tester chooses from the list the device(s) impacted by the bug he is describing. A bug must be linked to at least one device.
Each bug must be justified when possible:
You will need computer tools to properly perform your tasks on Bugtrapp. We remind you that you must not disclose any information or data that passes through Bugtrapp. We therefore ask you not to use any online tool that would require you to upload such data (a screenshot or video, for example) for processing. You must use only tools installed on your computer. Here are some examples:
You will (often) have to take screenshots to justify a bug ticket. Remember to draw a colored frame around the precise elements showing the visual bug.
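If you prefer a scriptable, fully local way of framing an element, here is a minimal sketch using the Pillow library; the file names and coordinates are placeholders to adapt to your own screenshot.

```python
# Minimal sketch: draw a red frame around the area showing the bug.
# File names and coordinates are placeholders; runs entirely locally.
from PIL import Image, ImageDraw

img = Image.open("screenshot.png")
draw = ImageDraw.Draw(img)
# (left, top, right, bottom) coordinates of the buggy element
draw.rectangle((120, 340, 480, 520), outline="red", width=5)
img.save("screenshot_annotated.png")
```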
It may be necessary to take a video rather than a screenshot to illustrate and explain a bug.
For Android:
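As a minimal sketch, assuming adb is installed on your computer and USB debugging is enabled on the device: dump the current log buffer to a local file with "adb logcat -d > bug_log.txt" (run it right after reproducing the crash), or record the screen with "adb shell screenrecord /sdcard/bug.mp4", stop with CTRL+C, then retrieve the file with "adb pull /sdcard/bug.mp4". These commands keep all data on your own machine, consistent with the rule above.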
For iOS:
Connect your iOS device to your computer.
Sync the device with iTunes, so that all crash-related information is copied to your computer.
Access the following directory to retrieve the log files:
On Mac OS X: ~/Library/Logs/CrashReporter/MobileDevice/
On Windows XP: C:\Documents and Settings\(User_name)\Application Data\Apple Computer\Logs\CrashReporter\
On Windows Vista and Windows 7, 8 or 10: open a Run window by pressing the Windows key (between CTRL and ALT) and R at the same time, and type the following command: %USERPROFILE%\AppData\Roaming\Apple Computer\Logs\CrashReporter\MobileDevice\
Follow these steps to retrieve your UDID:
The easiest way is to download an app dedicated to retrieving it for you, such as Device ID by Redphx or Device ID by Evozi.
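Alternatively (assuming adb is installed and USB debugging is enabled on the device), the Android ID can be read directly from a terminal with "adb shell settings get secure android_id". The dedicated apps above remain the simplest option, especially if the client expects a different identifier (IMEI, advertising ID, etc.).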
Before drafting the test script, the analyst will receive an instruction document from the PM containing the essential information and links to documents (or URLs) needed to start the test writing process. It should be noted, however, that this is not systematic: sometimes the analyst may have to start drafting test cases solely on the basis of the known elements indicated by the PM via Slack.
The documents useful for drafting test cases are grouped together in the TEST SUITE as attachments.
At the start of a test drafting project, the first day (at least part of it, depending on the number of input documents) is devoted to reading and analyzing the documents provided by the client. After reading the document(s) once or even several times, the analyst will have a better view of the quality of the materials at his disposal. He may then be able to anticipate certain points (need for datasets, missing information, etc.) and detect any significant discrepancy between the estimated project time and his or her own estimate, which he or she should share with the PM as soon as possible.
An essential notion to keep in mind during the writing process is the quality of the input documents. These documents determine the quality level of the TEST SUITE: if the quality of the input documents is poor, it is very difficult to deliver a TEST SUITE of superior quality.
The storyboard is the document that brings together the sequence of pages of a digital product.
It is a general-purpose document whose aim is to help understand the navigation within an app or a site. Alone, the storyboard is not enough to complete a test script. It does, however, enable a visualization of the product and/or its functions in a cinematic way.
This is the most important document for writing the test script: the functional specification list must record as many functions as possible and describe them as precisely as possible. It is important to review this document with the client to ensure it is valid and accurate, in whole or in part.
Generally, this document only partially represents the expected graphic rendering, so it is preferable not to rely on it to define graphic information. If possible, using a Detailed Functional Specification (DFS) rather than a General Functional Specification (GFS) is recommended.
While the use of an app’s or website’s wireframes during the writing process is optional, it is strongly recommended.
They can provide a preview of what each page should look like and make it possible to define the “visual” part of the test.
If these wireframes are not provided by the client, it is important to inform the PM that it will not be possible to check or assess graphic quality during the campaign, and that only the tester's judgment will identify an incorrect graphic display.
In the case of contradictions between the specifications and the wireframes, it is important to define with the PM which document is the reference (the standard), depending on whether it concerns the functional or the graphic description of the product.
The use of the wording document as a guide is optional during the writing process, but it is recommended.
It allows you to check the static text displayed in the application or on the website. As a general rule, this document indicates the correct wording of the digital product.
Sometimes (although this is quite rare), the client may provide, in addition to the other input documents, a "pre-test" version of the application or site to be tested. The analyst thus has the possibility to navigate the application/site and, thanks to this non-static support, gain a global vision of its organization and functions.
More concretely, this will allow the analyst, among other things, to better understand how the application works (the way the screens are linked together or how the different functionalities are organized) and to better interpret the specifications. As the specifications are not always very clear or complete, the pre-test application sometimes makes it possible to clarify a point lacking precision or to remove doubt about it. This occurs, in particular, when the specifications do not contain a visual, or when the customer has not provided wireframes.
However, although this is a very useful resource, especially if the quality of the other input documents provided is not satisfactory, you must remember that this resource alone is not sufficient for drafting the test script.
Indeed, the version provided is generally not the one that will be used for the test campaign. And even if it were, it would not be a definitive version: still in the course of development, it may be subject to constant modifications and will most probably contain anomalies. Without supporting documentation detailing the expected behaviors of the application/site, there is no guarantee that any behavior found in the pre-test version is indeed the desired one.
It is therefore essential to have, in addition to the pre-test application, specifications that describe in as much detail as possible the expected behaviors, functions, and content.
A "backlog" is a list of functions or tasks, deemed necessary and sufficient for the satisfactory completion of the project.
In absolute terms, its form can vary: it can be represented by an Excel document, a text file, a database, etc.
What is important is the "atomic" aspect of the elements of a backlog, as opposed to a narrative document in which, for example, a single sentence may contain several distinct requirements, or on the contrary describe a single requirement over several paragraphs.
Not all elements of the backlog are described at the same level of detail at each stage of the project: elements scheduled for completion at a distant date may be whole sections of functionality described in a single sentence, while those scheduled for completion in the near future may be very detailed and accompanied by many detail elements such as tests, interface drawings, etc.
The operation is as follows: we work from a TEST SUITE already written by the client, but we transfer the elements into an Excel template compatible with BugTrapp (to be downloaded from an existing BugTrapp TEST SUITE).
This method has some constraints: the client TEST SUITE can be modeled on our template, but that generally requires a lot of work. Depending on the expected quality level, communicated by the PM, the text must be adapted (style, form, etc.). It is not necessarily less time-consuming than writing from scratch, and there are often more steps than necessary: a lot of sorting is therefore required.
The wording almost always has to be reviewed and corrected to make the steps understandable to the testers. Indeed, the breakdown of the steps often does not follow the test methods expected on WAT (the client is not an expert in writing tests) and is therefore not optimized for the test process. For example, the first step of the client's TEST SUITE may in fact group together several steps that we will choose to separate in our own TEST SUITE, and vice versa.
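Part of this transfer can be scripted. The sketch below is a minimal, non-authoritative example in Python with pandas; all column names on both sides are assumptions and must be replaced with the real ones from a template downloaded from BugTrapp.

```python
# Minimal sketch: remap a client test suite export into a BugTrapp-compatible
# Excel template. All column names are hypothetical; check them against a
# template downloaded from an existing BugTrapp TEST SUITE.
# Requires the openpyxl package for .xlsx files.
import pandas as pd

# Hypothetical mapping: client column name -> template column name
COLUMN_MAP = {
    "Test step": "Steps to reproduce",
    "Expected": "Expected Behavior",
}

client = pd.read_excel("client_test_suite.xlsx")            # client's file
suite = client.rename(columns=COLUMN_MAP)[list(COLUMN_MAP.values())]
suite.to_excel("bugtrapp_test_suite.xlsx", index=False)     # pre-filled template
```

The wording and the breakdown of steps still have to be reviewed by hand; such a script only saves the copy-and-paste work.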
Reorganization of scenarios is sometimes necessary. To be more logical, steps relating to the same functionality should be grouped together as much as possible.
Using the documents provided, everything must be created from scratch: steps, Test Cases, scenarios...
The writing of the TEST SUITE consists of using all the documents read previously to provide the most complete description possible of the expected results.
Do not forget that even if the analyst has, at this stage, a clear vision of the project, the tester will be new to the application, and every piece of information will be necessary for him to know whether or not the step corresponds to the specification (a specification he has not yet read).
To begin writing the TEST SUITE, it is necessary to have the latest versions of the project’s support documents on screen:
The first thing to do is to transcribe them in the TEST SUITE.
For each scenario (or test case) the following should be entered:
The details of these elements must be clear and concise to enable project stakeholders to quickly find the parts they wish to consult in the event of a review.
Prerequisites should also be listed on the following line; if there are none, simply leave the "Prerequisite:" field blank.
These test steps are conceived thanks to different sources:
A step must correspond to a graphical or functional check. This definition implies that the content of a step must be "gauged" in advance and split if it proves to be "too" complex.
The step to be reproduced: corresponds to the action to be performed by the tester.
For example: Tap on the main image / Click on the Pause button / Check the display of the Menu button.
The expected behavior: corresponds to the verification of the supposed state of the component to be tested.
For example: Display of a loading wheel after 5 seconds / The carousel is paused when tapped / Display of an error pop-in with the Close button
Conditional and/or pre-requisite behaviors should be isolated as much as possible in order to make the execution of the scenario as smooth as possible.
Indeed, an interruption of the navigation to change the state of a profile or to generate another condition is a major source of lost time:
Scenario break -> Exiting the test step -> searching for a dataset -> returning to the step via steps not counted in the TEST SUITE.
For example:
| Steps to reproduce | Expected Behavior | Valid? | Comments |
|---|---|---|---|
| Verify the layout or display of the homepage | Display of the following elements: Offer Carousel (changes every 5 seconds; the department promos precede the promos with a PROMO label), Navigation menu, List of products with each containing: thumbnail, editorial/promotional/new item labels, price, crossed-out price label (if applicable), chevron, Footer, Related links | NO | This step requires testers to check several elements: carousel management rules, department carousel management, product list content, product list management rules |
| Verify the layout or display of the homepage | Display of the following elements: Carousel, Menu, List, Footer, Links | NO | Not enough detail on similar elements -> need to "dose" the information in order to allow the tester to effectively target its execution |
| Verify the layout or display of the homepage | Display of the following elements: Offer carousel, Navigation menu, Product list, Footer, Related links | YES | This step gives rise to a single check: the presence of each element separately |
The proper drafting of a test step also facilitates the work of all project stakeholders:
The writing of a test step must meet two specific objectives: clarity and precision.
It must precisely indicate the component and the action to be carried out so that it can be executed by any member of the project: Tester, PM, Analyst or Client.
As such, its drafting must be the object of particular care, avoiding any imprecise or overloaded wording. It must leave no doubt as to the verification to be carried out (no ifs or buts).
Similarly, the wording and writing style must be consistent throughout the test steps to harmonize reading and facilitate comprehension of the steps.
Each step taken independently may be clear, but it is on the overall TEST SUITE that the quality of the writing is judged.
If a site or application needs to be tested in several languages, it is advisable to offer the customer a language alternation between the terminals so that the same terminal is not tested several times in several languages. (→ Tags, Variables and Perimeters)
When testing covers several operating systems or different supports such as tablets, it is advisable to use one TEST SUITE per OS type and per support.
It is not recommended to mix Android and iOS terminals in the same TEST SUITE. An exception can be made only if the application has exactly the same behaviour and appearance on both platforms; in that case, the use of a single TEST SUITE is allowed.
In that case, it is necessary to ensure that the few OS-specific tests are in separate steps and associated with tags. (→ Tags, Variables and Perimeters)
Within the framework of certain test campaigns, it may be decided with the client to test all the steps of the TEST SUITE on a certain number of terminals (full tests) and only part of the steps on the others (optimized tests).
This ensures that the most important parts or functionalities of an application/website are covered on all the selected configurations, while reducing the overall testing time by performing the tests considered "minor" on only part of the configurations. (→ Tags, Variables and Perimeters)
How it works
After having created all the steps to be tested, thus covering the entire perimeter defined in agreement with the client, the analyst proceeds to determine which are the "minor" steps that he will choose to deactivate for the optimized tests.
This choice is based on several elements:
These elements must, in any case, take into account the test scope defined by the PM and the specified directions.
For example, test steps that we will not consider to be major include verifying graphics on a part that has yet to be finished. However, this could change if the PM has specified that the testing effort should be focused on the graphic elements as they are.
In the case of optimized tests, we will avoid testing over and over again test steps that are identical, similar, or repetitive.
Examples:
=> The choice can be made to test only 1 of these cases (e.g. entering an incorrect combination, which is surely the most likely in the case of standard use).
In the case of return verification steps:
=> Here, we can decide not to test the returns (favoring functions) or to test them once on a main page.
These examples do not necessarily have to be followed to the letter. The sorting of the optimized test steps remains subjective and will always depend on the project concerned and the instructions and information provided. It will therefore be necessary to proceed on a case-by-case basis.
A good practice is to create a first "template" bug with the nomenclature expected (by the client/the Project Manager) for the tickets on this mission; it must be removed at the end of the mission (leave it in the submitted state so as not to forget it).
When a PM clicks on the "Archive" action button inside a project, a confirmation pop-up warns him/her of what is going to be done (anonymization of all data, and therefore loss of some project data). After accepting this pop-up, an email is sent to all project members (except WAT testers), the administrator contact and the project creator. The archiving process can be cancelled by clicking on the Unarchive button; after 30 days, the Unarchive button is no longer available.
Thus, 30 days after archiving, the following data will be deleted and thus anonymized: