Many vulnerabilities require authenticated scanning for detection. Multiple authentication types are supported: Form, HTTP Basic, and Digest. You may want to scan the same web application multiple times with different credentials. To do this, add multiple authentication records and give them meaningful titles that reflect the privilege level, such as "Anonymous", "User", and "Admin". For example, a "User" record may find 300 links and 10 vulnerabilities, whereas an "Anonymous" record may find only 100 links and no vulnerabilities.
Internal scanning uses a scanner appliance placed inside your network. Select the scanner appliance you want to use by name from the Scanner Appliance menu in the web application settings. If you don't already have one, contact your Account Manager.
External scanning is always available using our cloud scanners set up around the globe at our Security Operations Centers (SOCs). For this option, choose External from the Scanner Appliance menu in the web application settings.
Would you like to enable Malware Monitoring? If you enable this feature in the settings of an external web application we'll run daily malware scans on the web application. You can specify the time for these scans and opt in to notification emails.
Tags help you to organize your web applications and other objects in your subscription and to control user access to those objects. By applying a tag to a web application, you grant access to it for users with the same tag in their scopes. You can also use tags to filter the web applications list, create web application reports and more. Go to the CyberSecurity Asset Management (CSAM) application to create and manage tags.
The crawl scope you choose in the web application settings determines where the scan will go. Your options are:
Limit to URL hostname
This means we'll limit crawling to the hostname within the URL, using HTTP or HTTPS and any port. Let's say your starting URL is http://www.test.com.
What links WILL be crawled - All links discovered in the www.test.com domain will be crawled; in other words, any link matching http://www.test.com/* (where * is a wildcard). For example, all links discovered in http://www.test.com/support and https://www.test.com:8080/logout will be crawled.
What links WILL NOT be crawled - No links on sub-domains of www.test.com will be followed. This means http://www2.test.com/ and http://sub1.test.com/ will not be crawled.
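The matching rule above can be sketched in a few lines of Python using only the standard library. This is an illustrative sketch of the "Limit to URL hostname" behavior, not the scanner's actual implementation; the function name and starting URL are assumptions.

```python
from urllib.parse import urlsplit

def in_hostname_scope(link: str, start_url: str = "http://www.test.com") -> bool:
    """A link is in scope when its hostname exactly matches the starting
    URL's hostname; scheme (HTTP or HTTPS) and port are ignored."""
    return urlsplit(link).hostname == urlsplit(start_url).hostname

print(in_hostname_scope("http://www.test.com/support"))       # crawled
print(in_hostname_scope("https://www.test.com:8080/logout"))  # crawled
print(in_hostname_scope("http://sub1.test.com/"))             # not crawled
```

Note that `urlsplit(...).hostname` strips the port, which is why https://www.test.com:8080/logout is still in scope.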
Limit to content located at or below URL subdirectory
This means we'll crawl only links at or below the URL subdirectory, using HTTP or HTTPS and any port. Let's say your starting URL is http://www.test.com/news.
What links WILL be crawled - All links starting with http://www.test.com/news will be crawled. Also http://www.test.com/news/headlines and https://www.test.com:8080/news/ will be crawled.
What links WILL NOT be crawled - Links like http://www.test.com/agenda and http://www2.test.com will not be crawled.
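A sketch of the "at or below URL subdirectory" rule, again using only the Python standard library (the function name is an assumption for illustration):

```python
from urllib.parse import urlsplit

def in_subdirectory_scope(link: str,
                          start_url: str = "http://www.test.com/news") -> bool:
    """In scope when the hostname matches the starting URL's hostname and
    the path starts with the starting URL's path; scheme and port are ignored."""
    start, target = urlsplit(start_url), urlsplit(link)
    return (target.hostname == start.hostname
            and target.path.startswith(start.path))

print(in_subdirectory_scope("http://www.test.com/news/headlines"))  # crawled
print(in_subdirectory_scope("https://www.test.com:8080/news/"))     # crawled
print(in_subdirectory_scope("http://www.test.com/agenda"))          # not crawled
```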
Limit to URL hostname and specified sub-domain
This means we'll crawl only the URL hostname and one specified sub-domain, using HTTP or HTTPS and any port. Let's say your starting URL is http://www.test.com/news/ and the sub-domain is sub1.test.com.
What links WILL be crawled - All links discovered in www.test.com and in sub1.test.com and any of its sub-domains will be crawled. Also these links will be crawled: http://www.test.com/support, https://www.test.com:8080/logout, http://sub1.test.com/images/ and http://videos.sub1.test.com.
What links WILL NOT be crawled - Links whose hostname neither matches the web application URL hostname nor belongs to sub1.test.com or one of its sub-domains will not be followed. This means http://videos.test.com will not be crawled.
Limit to URL hostname and specified domains
This means we'll crawl only the URL hostname and specified domains, using HTTP or HTTPS and any port. Let's say your starting URL is http://www.test.com/news/ and the specified domains are sub1.test.com and site.test.com.
What links WILL be crawled - All links discovered in www.test.com, in sub1.test.com and in all other domains specified will be crawled. This means these links will be crawled: http://www.test.com/support, https://www.test.com:8080/logout and http://sub1.test.com/images/.
What links WILL NOT be crawled - Links whose hostname does not match the web application URL hostname or one of the specified domains will not be followed. This means http://videos.test.com and http://videos.sub1.test.com will not be crawled.
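The "specified domains" rule amounts to an exact hostname match against the web application URL hostname plus the listed domains. A minimal sketch, assuming the example starting URL and domain list above (the function name is hypothetical):

```python
from urllib.parse import urlsplit

def in_domains_scope(link: str,
                     start_url: str = "http://www.test.com/news/",
                     extra_domains: tuple = ("sub1.test.com", "site.test.com")) -> bool:
    """In scope when the link's hostname exactly matches the starting URL's
    hostname or one of the specified domains; scheme and port are ignored.
    Sub-domains of the specified domains do NOT match in this mode."""
    allowed = {urlsplit(start_url).hostname, *extra_domains}
    return urlsplit(link).hostname in allowed

print(in_domains_scope("http://sub1.test.com/images/"))    # crawled
print(in_domains_scope("http://videos.sub1.test.com"))     # not crawled
```

The exact-match check is what distinguishes this mode from "specified sub-domain" above, where sub-domains of the specified sub-domain are also followed.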
Exclusions lists are configurable at a global level (across all web applications in your subscription) as well as customizable for a web application. You can implement customized exclusion lists for your web application and ignore the global settings while creating or editing a web application.
You can use exclusion lists to tell us which links to scan and which to ignore for all web applications in your subscription. For a production web application, it's best practice to exclude pages whose functionality would have undesirable results if executed, such as sending out large volumes of email, triggering a "delete all" action, or disabling/deleting accounts.
Exclusion lists include allow lists, exclude lists, POST data exclude lists, and logout regular expression lists.
What if I use an exclude list and an allow list?
If a web application has both an exclude list and an allow list, we treat the allow list entries as exceptions to the exclude list. We will not crawl any exclude list entry unless it matches an allow list entry. We'll crawl all other links including those that match allow list entries.
What if I use only an exclude list?
If a web application has an exclude list only (no allow list), we'll skip all links that match exclude list entries. If the web application has an allow list only (no exclude list), we'll crawl only those links that match allow list entries.
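The precedence rules described above can be sketched as a small decision function. This is an illustrative sketch of the documented behavior, assuming the lists are regular expressions matched against each link; the helper name is an assumption, not part of the product.

```python
import re

def should_crawl(link: str, exclude_patterns: list, allow_patterns: list) -> bool:
    """Decide whether to crawl a link given exclude/allow regex lists,
    following the documented precedence rules."""
    excluded = any(re.search(p, link) for p in exclude_patterns)
    allowed = any(re.search(p, link) for p in allow_patterns)
    if exclude_patterns and allow_patterns:
        # Allow entries act as exceptions to the exclude list: an excluded
        # link is still crawled if it also matches an allow entry.
        return allowed or not excluded
    if exclude_patterns:   # exclude list only: skip links that match
        return not excluded
    if allow_patterns:     # allow list only: crawl only links that match
        return allowed
    return True            # no lists: crawl everything in scope

print(should_crawl("http://www.test.com/admin/delete", [r"/admin/"], []))
print(should_crawl("http://www.test.com/admin/report",
                   [r"/admin/"], [r"/admin/report"]))
```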
Use Qualys Browser Recorder to create a Selenium script. Qualys Browser Recorder is a free browser extension to record and play back scripts for web application automation testing. It includes the entire Selenium Core, allowing you to capture web elements and record actions in the browser so you can generate, edit, and play back automated test cases quickly and easily.
You can upload Selenium scripts to your web application settings, and we'll replay these scripts while scanning the web application. For example:
- We can replay recorded steps to scan a web application that requires complex workflows, such as selecting user input combinations that require certain knowledge and/or user interaction.
- We can replay recorded steps, like clicking a series of buttons or filling out forms.
- We can replay recorded steps to complete login and authentication requirements.
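For the login case, a recorded script is a sequence of open/type/click commands. As a hedged illustration, a script in Selenium's Selenese table format for a simple login workflow might look like the following; the URL, element locators, and credential values are placeholders, not real settings.

```html
<!-- Hypothetical recorded login steps in Selenese table format.
     Target URL, locators, and values are placeholder assumptions. -->
<table>
  <tr><td>open</td><td>http://www.test.com/login</td><td></td></tr>
  <tr><td>type</td><td>id=username</td><td>scanuser</td></tr>
  <tr><td>type</td><td>id=password</td><td>secret</td></tr>
  <tr><td>clickAndWait</td><td>id=login-button</td><td></td></tr>
</table>
```

In practice you would record these steps in the browser rather than writing them by hand, then upload the exported script in the web application settings.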
Where do I get Qualys Browser Recorder?
Download and install the latest version of the Google Chrome browser, then open the Chrome Web Store and search for Qualys Browser Recorder.
How do I create a Selenium script?