Merge pull request #170 from HASecuritySolutions/beta-1.8

VulnWhisperer Release 1.8
This commit is contained in:
Quim Montal
2019-04-17 10:36:35 +02:00
committed by GitHub
59 changed files with 2410 additions and 1392 deletions


@ -11,7 +11,10 @@ assignees: ''
A clear and concise description of what the bug is.
**Affected module**
Which one is the module that is not working as expected, e.g. Nessus, Qualys WAS, Qualys VM, OpenVAS, ELK, Jira...)
Which module is not working as expected (e.g. Nessus, Qualys WAS, Qualys VM, OpenVAS, ELK, Jira...).
**VulnWhisperer debug trail**
If applicable, paste the VulnWhisperer debug trail of the execution for further detail (execute with the '-d' flag).
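A debug run typically looks like this (the config path and section name here are just examples):
```shell
vuln_whisperer -c configs/frameworks_example.ini -s nessus -d
```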
**To Reproduce**
Steps to reproduce the behavior:
@ -27,8 +30,9 @@ A clear and concise description of what you expected to happen.
If applicable, add screenshots to help explain your problem.
**System in which VulnWhisperer runs (please complete the following information):**
- OS: [e.g. iOS]
- Version [e.g. 22]
- OS: [e.g. Ubuntu Server]
- Version: [e.g. 18.04.2 LTS]
- VulnWhisperer Version: [e.g. 1.7.1]
**Additional context**
Add any other context about the problem here.

2
.gitignore vendored

@ -2,7 +2,9 @@
data/
logs/
elk6/vulnwhisperer.ini
resources/elk6/vulnwhisperer.ini
configs/frameworks_example.ini
tests/data
# Byte-compiled / optimized / DLL files
__pycache__/

6
.gitmodules vendored

@ -1,3 +1,3 @@
[submodule "qualysapi"]
path = deps/qualysapi
url = https://github.com/austin-taylor/qualysapi.git
[submodule "tests/data"]
path = tests/data
url = https://github.com/HASecuritySolutions/VulnWhisperer-tests.git
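To fetch the new tests/data submodule in an existing clone, standard git usage (not part of this diff) is:
```shell
git submodule update --init --recursive
```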


@ -3,12 +3,22 @@ language: python
cache: pip
python:
- 2.7
env:
- TEST_PATH=tests/data
services:
- docker
# - 3.6
#matrix:
# allow_failures:
# - python: 3.6 - Commenting out testing for Python 3.6 until ready
before_install:
- mkdir -p ./data/esdata1
- mkdir -p ./data/es_snapshots
- sudo chown -R 1000:1000 ./data/es*
- docker build -t vulnwhisperer-local .
- docker-compose -f docker-compose-test.yml up -d
install:
- pip install -r requirements.txt
- pip install flake8 # pytest # add another testing frameworks later
@ -18,7 +28,9 @@ before_script:
# exit-zero treats all errors as warnings. The GitHub editor is 127 chars wide
- flake8 . --count --exit-zero --exclude=deps/qualysapi --max-complexity=10 --max-line-length=127 --statistics
script:
- true # pytest --capture=sys # add other tests here
- python setup.py install
- bash tests/test-vuln_whisperer.sh
- bash tests/test-docker.sh
notifications:
on_success: change
on_failure: change # `always` will be the setting once code changes slow down


@ -2,10 +2,10 @@ FROM centos:latest
MAINTAINER Justin Henderson justin@hasecuritysolutions.com
RUN yum update -y
RUN yum install -y python python-devel git gcc
RUN curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
RUN python get-pip.py
RUN yum update -y && \
yum install -y python python-devel git gcc && \
curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py && \
python get-pip.py
WORKDIR /opt/VulnWhisperer
@ -13,14 +13,11 @@ COPY requirements.txt requirements.txt
COPY setup.py setup.py
COPY vulnwhisp/ vulnwhisp/
COPY bin/ bin/
COPY deps/ deps/
COPY configs/frameworks_example.ini frameworks_example.ini
RUN python setup.py clean --all
RUN pip install -r requirements.txt
RUN python setup.py clean --all && \
pip install -r requirements.txt
WORKDIR /opt/VulnWhisperer/deps/qualysapi
RUN python setup.py install
WORKDIR /opt/VulnWhisperer
RUN python setup.py install

191
README.md

@ -4,8 +4,7 @@
<p align="center" style="width:400px"><img src="https://github.com/austin-taylor/vulnwhisperer/blob/master/docs/source/vulnWhispererWebApplications.png" style="width:400px"></p>
VulnWhisperer is a vulnerability data and report aggregator. VulnWhisperer will pull all the reports
and create a file with a unique filename which is then fed into logstash. Logstash extracts data from the filename and tags all of the information inside the report (see logstash_vulnwhisp.conf file). Data is then shipped to elasticsearch to be indexed.
VulnWhisperer is a vulnerability management tool and report aggregator. VulnWhisperer pulls all the reports from the different vulnerability scanners and creates a file with a unique filename for each one, using that data later to sync with Jira and to feed Logstash. Jira performs a full closed-cycle sync with the data provided by the scanners, while Logstash indexes and tags all of the information inside the report (see the logstash files at /resources/elk6/pipeline/). Data is then shipped to Elasticsearch to be indexed, and ends up in a visual and searchable format in Kibana with predefined dashboards.
[![Build Status](https://travis-ci.org/HASecuritySolutions/VulnWhisperer.svg?branch=master)](https://travis-ci.org/HASecuritySolutions/VulnWhisperer)
[![MIT License](https://img.shields.io/badge/license-MIT-blue.svg?style=flat)](http://choosealicense.com/licenses/mit/)
@ -19,7 +18,7 @@ Currently Supports
- [X] [Nessus (**v6**/**v7**/**v8**)](https://www.tenable.com/products/nessus/nessus-professional)
- [X] [Qualys Web Applications](https://www.qualys.com/apps/web-app-scanning/)
- [X] [Qualys Vulnerability Management](https://www.qualys.com/apps/vulnerability-management/)
- [X] [OpenVAS](http://www.openvas.org/)
- [X] [OpenVAS (**v7**/**v8**/**v9**)](http://www.openvas.org/)
- [X] [Tenable.io](https://www.tenable.com/products/tenable-io)
- [ ] [Detectify](https://detectify.com/)
- [ ] [Nexpose](https://www.rapid7.com/products/nexpose/)
@ -39,9 +38,10 @@ Getting Started
===============
1) Follow the [install requirements](#installreq)
2) Fill out the section you want to process in <a href="https://github.com/austin-taylor/VulnWhisperer/blob/master/configs/frameworks_example.ini">example.ini file</a>
3) Modify the IP settings in the <a href="https://github.com/austin-taylor/VulnWhisperer/tree/master/logstash">logstash files to accomodate your environment</a> and import them to your logstash conf directory (default is /etc/logstash/conf.d/)
4) Import the <a href="https://github.com/austin-taylor/VulnWhisperer/tree/master/kibana/vuln_whisp_kibana">kibana visualizations</a>
2) Fill out the section you want to process in <a href="https://github.com/HASecuritySolutions/VulnWhisperer/blob/master/configs/frameworks_example.ini">frameworks_example.ini file</a>
3) [JIRA] If using Jira, fill in the Jira section of the config file mentioned above.
4) [ELK] Modify the IP settings in the <a href="https://github.com/austin-taylor/VulnWhisperer/tree/master/logstash">Logstash files to accommodate your environment</a> and import them to your logstash conf directory (default is /etc/logstash/conf.d/); see the copy example after this list
5) [ELK] Import the <a href="https://github.com/austin-taylor/VulnWhisperer/tree/master/kibana/vuln_whisp_kibana">Kibana visualizations</a>
6) [Run VulnWhisperer](#run)
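For step 4, a typical copy of the shipped pipelines into the Logstash conf directory might look like this (the source path is assumed from this release's /resources/elk6/pipeline/ layout):
```shell
sudo cp resources/elk6/pipeline/*.conf /etc/logstash/conf.d/
```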
Need assistance or just want to chat? Join our [slack channel](https://join.slack.com/t/vulnwhisperer/shared_invite/enQtNDQ5MzE4OTIyODU0LWQxZTcxYTY0MWUwYzA4MTlmMWZlYWY2Y2ZmM2EzNDFmNWVlOTM4MzNjYzI0YzdkMDA0YmQyYWRhZGI2NGUxNGI)
@ -49,20 +49,27 @@ Need assistance or just want to chat? Join our [slack channel](https://join.slac
Requirements
-------------
####
* ElasticStack 5.x
* Python 2.7
* Vulnerability Scanner
* Optional: Message broker such as Kafka or RabbitMQ
* Reporting System: Jira / ElasticStack 6.6
<a id="installreq">Install Requirements - VulnWhisperer (may require sudo)</a>
--------------------
**First install requirement dependencies**
**Install OS package dependencies** (Debian-based distros only; CentOS doesn't need this)
```shell
sudo apt-get install zlib1g-dev libxml2-dev libxslt1-dev
```
**Then install requirements**
**(Optional) Use a Python virtualenv to avoid messing with the host's Python libraries**
```shell
virtualenv venv              # creates the Python 2.7 virtualenv
source venv/bin/activate     # activates it; pip now installs libraries inside the venv without sudo
deactivate                   # leaves the virtualenv once you are done
```
**Install Python library requirements**
```shell
pip install -r /path/to/VulnWhisperer/requirements.txt
@ -70,91 +77,14 @@ cd /path/to/VulnWhisperer
python setup.py install
```
**(Optional) If using a proxy, export the proxy URL as environment variables**
```shell
export HTTP_PROXY=http://example.com:8080
export HTTPS_PROXY=http://example.com:8080
```
Now you're ready to pull down scans (see the <a href="#run">run section</a>).
Install Requirements-ELK Node **\*SAMPLE\***
--------------------
The following instructions should be utilized as a **Sample Guide** in the absence of an existing ELK Cluster/Node. This covers a Debian example install of a stand-alone Elasticsearch & Kibana node.
While Logstash is included in this install guide, it is recommended that a separate host pulling the VulnWhisperer data is utilized with Logstash to ship the data to the Elasticsearch node.
*Please note there is a docker-compose.yml available as well.*
**Debian:** *(https://www.elastic.co/guide/en/elasticsearch/reference/5.6/deb.html)*
```shell
sudo apt-get install -y default-jre
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
sudo apt-get install apt-transport-https
echo "deb https://artifacts.elastic.co/packages/5.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-5.x.list
sudo apt-get update && sudo apt-get install elasticsearch kibana logstash
sudo /bin/systemctl daemon-reload
sudo /bin/systemctl enable elasticsearch.service
sudo /bin/systemctl enable kibana.service
sudo /bin/systemctl enable logstash.service
```
**Elasticsearch & Kibana Sample Config Notes**
Utilizing your favorite text editor:
* Grab your host IP and change the IP of your /etc/elasticsearch/elasticsearch.yml file. (This defaults to 'localhost')
* Validate Elasticsearch is set to run on port 9200 (Default)
* Grab your host IP and change the IP of your /etc/kibana/kibana.yml file. (This defaults to 'localhost') *Validate that Kibana is pointing to the correct Elasticsearch IP (This was set in the previous step)*
* Validate Kibana is set to run on port 5601 (Default)
*Start elasticsearch and validate they are running/communicating with one another:*
```shell
sudo service elasticsearch start
sudo service kibana start
```
OR
```shell
sudo systemctl start elasticsearch.service
sudo systemctl start kibana.service
```
**Logstash Sample Config Notes**
* Copy/Move the Logstash .conf files from */VulnWhisperer/logstash/* to */etc/logstash/conf.d/*
* Validate the Logstash.conf files *input* contains the correct location of VulnWhisper Scans in the *input.file.path* directory identified below:
```
input {
file {
path => "/opt/vulnwhisperer/nessus/**/*"
start_position => "beginning"
tags => "nessus"
type => "nessus"
}
}
```
* Validate the Logstash.conf files *output* contains the correct Elasticsearch IP set during the previous step above: (This will default to localhost)
```
output {
if "nessus" in [tags] or [type] == "nessus" {
#stdout { codec => rubydebug }
elasticsearch {
hosts => [ "localhost:9200" ]
index => "logstash-vulnwhisperer-%{+YYYY.MM}"
}
}
```
* Validate logstash has the correct file permissions to read the location of the VulnWhisperer Scans
Once configured, run Logstash. Running Logstash as a service will pick up all the files in */etc/logstash/conf.d/*; if you would like to run only one Logstash file, use the command referenced below:
Logstash as a service:
```shell
sudo service logstash start
```
*OR*
```shell
sudo systemctl start logstash.service
```
Single Logstash file:
```shell
sudo /usr/share/logstash/bin/logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/1000_nessus_process_file.conf
```
Configuration
-----
@ -172,70 +102,49 @@ There are a few configuration steps to setting up VulnWhisperer:
-----
To run, fill out the configuration file with your vulnerability scanner settings. Then you can execute from the command line.
```shell
vuln_whisperer -c configs/frameworks_example.ini -s nessus
# optional flag: -F -> provides "Fancy" log colouring, good for comprehension when manually executing VulnWhisperer
or
vuln_whisperer -c configs/frameworks_example.ini -s qualys
```
If no section is specified (e.g. -s nessus), vulnwhisperer will check on the config file for the modules that have the property enabled=true and run them sequentially.
If no section is specified (e.g. `-s nessus`), vulnwhisperer will check the config file for the modules that have the property `enabled=true` and run them sequentially.
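For example, to run every module flagged `enabled=true` in one pass (same flags as above):
```shell
vuln_whisperer -c configs/frameworks_example.ini
```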
<p align="center" style="width:300px"><img src="https://github.com/austin-taylor/vulnwhisperer/blob/master/docs/source/running_vuln_whisperer.png" style="width:400px"></p>
Next you'll need to import the visualizations into Kibana and setup your logstash config. A more thorough README is underway with setup instructions.
Next you'll need to import the visualizations into Kibana and set up your Logstash config. You can either follow the sample setup instructions [here](https://github.com/HASecuritySolutions/VulnWhisperer/wiki/Sample-Guide-ELK-Deployment) or go for the `docker-compose` solution we offer.
Docker-compose
-----
The docker-compose file has been tested on an Ubuntu 18.04 environment with docker-ce v18.06. Its purpose is to store the scanner data locally, letting vulnwhisperer update the records and Logstash feed them to Elasticsearch, so it requires a local storage folder.
ELK is a whole world by itself, and for newcomers to the platform, it requires basic Linux skills and usually a bit of troubleshooting until it is deployed and working as expected. As we are not able to provide support for each user's ELK problems, we put together a docker-compose setup which includes:
- It will run out of the box if you create, in the root directory of VulnWhisperer, a folder named "data", which needs read/write/execute permissions for other users in order to sync:
```shell
mkdir data && chmod -R 666 data #data/database/report_tracker.db will need 777 to use with local vulnwhisperer
```
otherwise the users running inside the docker containers will not be able to work with it properly. If you don't apply chmod recursively, syncing the data will still work, but only the root user on localhost will have access to the created data (running a local vulnwhisperer against the same data will then break).
- The docker/logstash.yml file will need other read/write permissions in order for the logstash container to use the configuration file; you'll need to run:
```shell
chmod 666 docker/logstash.yml
```
- VulnWhisperer
- Logstash 6.6
- ElasticSearch 6.6
- Kibana 6.6
- The vulnwhisperer container inside of docker-compose uses network_mode=host instead of the default bridge mode; this is due to issues encountered when the container tries to pull data from scanners on a different VLAN than the one you are currently on. The host network mode uses the DNS and interface from the host itself, fixing those issues, but it breaks the network isolation from the container (docker creates bridge interfaces to route the traffic, blocking both the container's and the host's network). If you change this to bridge, you might need to add your DNS to the config in order to resolve internal hostnames.
- ElasticSearch requires the kernel setting vm.max_map_count to be at least 262144; otherwise it will probably break at launch. See https://elk-docker.readthedocs.io/#prerequisites, or the sketch right below.
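A common way to set this on the docker host (standard sysctl usage, not specific to this repo):
```shell
sudo sysctl -w vm.max_map_count=262144                          # takes effect immediately, resets on reboot
echo "vm.max_map_count=262144" | sudo tee -a /etc/sysctl.conf   # persists across reboots
```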
- If you want to change the "data" folder for storing the results, remember to change it in both the docker-compose.yml file and the logstash files in the root "docker/" folder.
- Hostnames do NOT allow _ (underscores) in them; if you change the hostname configuration in the docker-compose file and add underscores, the logstash config files will fail.
- If you are having issues with the connection between hosts, you can troubleshoot them by spawning a shell in the affected container as follows:
```shell
docker ps #check the container is running
docker exec -i -t vulnwhisp-vulnwhisperer /bin/bash #where vulnwhisp-vulnwhisperer is the container name you want to troubleshoot
```
You can also make sure that all ELK components are working with `curl -i host:9200` (Elasticsearch), `curl -i host:5601` (Kibana) and `curl -i host:9600` (Logstash), as spelled out below. WARNING! It is possible that Logstash does not expose its port to the external network, only to its internal docker network "esnet".
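Spelled out as commands (assuming you run them on the docker host):
```shell
curl -i localhost:9200   # Elasticsearch
curl -i localhost:5601   # Kibana
curl -i localhost:9600   # Logstash; may only be reachable from the internal "esnet" network
```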
- If Kibana is not showing the results, check that you are searching over the whole time range, as by default it shows logs from the last 15 minutes (you can choose up to the last 5 years).
- X-Pack has been disabled by default due to the noise, plus being a trial version. You can enable it by modifying the docker-compose.yml and docker/logstash.conf files; logstash.conf contains the default credentials for the X-Pack-enabled ES.
- On the Logstash container, "/usr/share/logstash/pipeline/" is the default path for pipelines and "/usr/share/logstash/config/" for the logstash.yml file, instead of "/etc/logstash/conf.d/" and "/etc/logstash/".
- In order to make vulnwhisperer run periodically, and only the vulnwhisperer code, add the following to crontab:
The docker-compose just requires specifying the paths where the VulnWhisperer data will be saved, and where the config files reside. If run directly after `git clone`, with just adding the scanner config to the VulnWhisperer config file (/resources/elk6/vulnwhisperer.ini), it will work out of the box.
It also takes care of loading the Kibana dashboards and visualizations automatically through the API, which otherwise needs to be done manually at Kibana's startup.
```shell
0 8 * * * /usr/bin/docker-compose up vulnwhisp-vulnwhisperer
```
For more info about the docker-compose setup, check the [docker-compose wiki](https://github.com/HASecuritySolutions/VulnWhisperer/wiki/docker-compose-Instructions) or the [FAQ](https://github.com/HASecuritySolutions/VulnWhisperer/wiki).
To launch docker-compose, do:
```shell
docker-compose -f docker-compose.yml up
```
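Standard docker-compose usage applies; for example, to run it in the background instead:
```shell
docker-compose -f docker-compose.yml up -d
```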
Roadmap
=======
Our current Roadmap is as follows:
- [ ] Create a Vulnerability Standard
- [ ] Map every scanner results to the standard
- [ ] Create Scanner module guidelines for easy integration of new scanners (consistency will allow #14)
- [ ] Refactor the code to reuse functions and enable full compatibility among modules
- [ ] Change Nessus CSV to JSON (Consistency and Fix #82)
- [ ] Adapt single Logstash to standard and Kibana Dashboards
- [ ] Implement Detectify Scanner
- [ ] Implement Splunk Reporting/Dashboards
Running Nightly
---------------
If you're running linux, be sure to setup a cronjob to remove old files that get stored in the database. Be sure to change .csv if you're using json.
On top of this, we try to focus on fixing bugs as soon as possible, which might delay development. We also warmly welcome PRs, and once we have the new standard implemented, it will be very easy to add compatibility with new scanners.
Setup crontab -e with the following config (modify to your environment) - this will run vulnwhisperer each night at 0130:
`00 1 * * * /usr/bin/find /opt/vulnwhisp/ -type f -name '*.csv' -ctime +3 -exec rm {} \;`
`30 1 * * * /usr/local/bin/vuln_whisperer -c /opt/vulnwhisp/configs/example.ini`
Another option is to tell logstash to delete files after they have been processed.
_For windows, you may need to type the full path of the binary in vulnWhisperer located in the bin directory._
The Vulnerability Standard will initially be a simple, single-level JSON with standardized variable names for all the information that the different scanners have in common, while keeping the rest of the variables as they are. In the future, once everything is implemented, we will evaluate moving to an existing standard like ECS or the AWS Vulnerability Schema; we prioritize functionality over perfection.
Video Walkthrough -- Featured on ElasticWebinar
----------------------------------------------
@ -250,9 +159,9 @@ Authors
Contributors
------------
- [@pemontto](https://github.com/pemontto)
- [Quim Montal (@qmontal)](https://github.com/qmontal)
- [Andrea Lusuardi (@uovobw)](https://github.com/uovobw)
- [@pemontto](https://github.com/pemontto)
- [@cybergoof](https://github.com/cybergoof)
AS SEEN ON TV
-------------


@ -5,17 +5,20 @@ __author__ = 'Austin Taylor'
from vulnwhisp.vulnwhisp import vulnWhisperer
from vulnwhisp.base.config import vwConfig
from vulnwhisp.test.mock import mockAPI
import os
import argparse
import sys
import logging
def isFileValid(parser, arg):
    # argparse type-check helper: error out if the given path does not exist
    if not os.path.exists(arg):
        parser.error("The file %s does not exist!" % arg)
    else:
        return arg
def main():
parser = argparse.ArgumentParser(description=""" VulnWhisperer is designed to create actionable data from\
@ -30,46 +33,64 @@ def main():
help='JIRA required only! Scan name from scan to report')
parser.add_argument('-v', '--verbose', dest='verbose', action='store_true', default=True,
help='Prints status out to screen (defaults to True)')
parser.add_argument('-u', '--username', dest='username', required=False, default=None, type=lambda x: x.strip(), help='The NESSUS username')
parser.add_argument('-p', '--password', dest='password', required=False, default=None, type=lambda x: x.strip(), help='The NESSUS password')
parser.add_argument('-F', '--fancy', action='store_true', help='Enable colourful logging output')
parser.add_argument('-d', '--debug', action='store_true', help='Enable debugging messages')
parser.add_argument('-u', '--username', dest='username', required=False, default=None,
help='The NESSUS username', type=lambda x: x.strip())
parser.add_argument('-p', '--password', dest='password', required=False, default=None,
help='The NESSUS password', type=lambda x: x.strip())
parser.add_argument('-F', '--fancy', action='store_true',
help='Enable colourful logging output')
parser.add_argument('-d', '--debug', action='store_true',
help='Enable debugging messages')
parser.add_argument('--mock', action='store_true',
help='Enable mocked API responses')
parser.add_argument('--mock_dir', dest='mock_dir', required=False, default=None,
help='Path of test directory')
args = parser.parse_args()
# First setup logging
logging.basicConfig(
stream=sys.stdout,
#format only applies when not using -F flag for colouring
format='%(levelname)s:%(name)s:%(funcName)s:%(message)s',
level=logging.DEBUG if args.debug else logging.INFO
)
logger = logging.getLogger(name='main')
logger = logging.getLogger()
# we set up the logger to log as well to file
fh = logging.FileHandler('vulnwhisperer.log')
fh.setLevel(logging.DEBUG if args.debug else logging.INFO)
fh.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(name)s - %(funcName)s:%(message)s", "%Y-%m-%d %H:%M:%S"))
logger.addHandler(fh)
if args.fancy:
import coloredlogs
coloredlogs.install(level='DEBUG' if args.debug else 'INFO')
if args.mock:
mock_api = mockAPI(args.mock_dir, args.verbose)
mock_api.mock_endpoints()
exit_code = 0
try:
if args.config and not args.section:
# this remains a print since we are in the main binary
print('WARNING: {warning}'.format(warning='No section was specified, vulnwhisperer will scrape enabled modules from config file. \
\nPlease specify a section using -s. \
\nExample vuln_whisperer -c config.ini -s nessus'))
logger.info('No section was specified, vulnwhisperer will scrape enabled modules from the config file.')
config = vwConfig(config_in=args.config)
enabled_sections = config.get_sections_with_attribute('enabled')
for section in enabled_sections:
vw = vulnWhisperer(config=args.config,
profile=section,
verbose=args.verbose,
username=args.username,
password=args.password,
source=args.source,
scanname=args.scanname)
vw.whisper_vulnerabilities()
# TODO: fix this to NOT be exit 1 unless in error
sys.exit(1)
config = vwConfig(config_in=args.config)
enabled_sections = config.get_sections_with_attribute('enabled')
for section in enabled_sections:
vw = vulnWhisperer(config=args.config,
profile=section,
verbose=args.verbose,
username=args.username,
password=args.password,
source=args.source,
scanname=args.scanname)
exit_code += vw.whisper_vulnerabilities()
else:
logger.info('Running vulnwhisperer for section {}'.format(args.section))
vw = vulnWhisperer(config=args.config,
@ -79,10 +100,10 @@ def main():
password=args.password,
source=args.source,
scanname=args.scanname)
exit_code += vw.whisper_vulnerabilities()
vw.whisper_vulnerabilities()
# TODO: fix this to NOT be exit 1 unless in error
sys.exit(1)
close_logging_handlers(logger)
sys.exit(exit_code)
except Exception as e:
if args.verbose:
@ -90,8 +111,15 @@ def main():
logger.error('{}'.format(str(e)))
print('ERROR: {error}'.format(error=e))
# TODO: fix this to NOT be exit 2 unless in error
close_logging_handlers(logger)
sys.exit(2)
close_logging_handlers(logger)
def close_logging_handlers(logger):
    # iterate over a copy, since detaching handlers mutates logger.handlers
    for handler in list(logger.handlers):
        handler.close()
        logger.removeHandler(handler)  # removeFilter() would leave the handler attached
if __name__ == '__main__':
main()


@ -26,7 +26,7 @@ enabled = true
hostname = qualysapi.qg2.apps.qualys.com
username = exampleuser
password = examplepass
write_path=/opt/VulnWhisperer/data/qualys/
write_path=/opt/VulnWhisperer/data/qualys_web/
db_path=/opt/VulnWhisperer/data/database
verbose=true
@ -42,16 +42,10 @@ enabled = true
hostname = qualysapi.qg2.apps.qualys.com
username = exampleuser
password = examplepass
write_path=/opt/VulnWhisperer/data/qualys/
write_path=/opt/VulnWhisperer/data/qualys_vuln/
db_path=/opt/VulnWhisperer/data/database
verbose=true
# Set the maximum number of retries each connection should attempt.
#Note, this applies only to failed connections and timeouts, never to requests where the server returns a response.
max_retries = 10
# Template ID will need to be retrieved for each document. Please follow the reference guide above for instructions on how to get your template ID.
template_id = 126024
[detectify]
#Reference https://developer.detectify.com/
enabled = false
@ -74,27 +68,15 @@ write_path=/opt/VulnWhisperer/data/openvas/
db_path=/opt/VulnWhisperer/data/database
verbose=true
#[proxy]
; This section is optional. Leave it out if you're not using a proxy.
; You can use environmental variables as well: http://www.python-requests.org/en/latest/user/advanced/#proxies
; proxy_protocol set to https, if not specified.
#proxy_url = proxy.mycorp.com
; proxy_port will override any port specified in proxy_url
#proxy_port = 8080
; proxy authentication
#proxy_username = proxyuser
#proxy_password = proxypass
[jira]
enabled = false
hostname = jira-host
username = username
password = password
write_path = /opt/VulnWhisperer/data/jira/
db_path = /opt/VulnWhisperer/data/database
verbose = true
dns_resolv = False
#Sample jira report scan, will automatically be created for existent scans
#[jira.qualys_vuln.test_scan]

90
configs/test.ini Executable file

@ -0,0 +1,90 @@
[nessus]
enabled=true
hostname=nessus
port=443
username=nessus_username
password=nessus_password
write_path=/opt/VulnWhisperer/data/nessus/
db_path=/opt/VulnWhisperer/data/database
trash=false
verbose=true
[tenable]
enabled=true
hostname=tenable
port=443
username=tenable.io_username
password=tenable.io_password
write_path=/opt/VulnWhisperer/data/tenable/
db_path=/opt/VulnWhisperer/data/database
trash=false
verbose=true
[qualys_web]
#Reference https://www.qualys.com/docs/qualys-was-api-user-guide.pdf to find your API
enabled = false
hostname = qualys_web
username = exampleuser
password = examplepass
write_path=/opt/VulnWhisperer/data/qualys_web/
db_path=/opt/VulnWhisperer/data/database
verbose=true
# Set the maximum number of retries each connection should attempt.
#Note, this applies only to failed connections and timeouts, never to requests where the server returns a response.
max_retries = 10
# Template ID will need to be retrieved for each document. Please follow the reference guide above for instructions on how to get your template ID.
template_id = 126024
[qualys_vuln]
#Reference https://www.qualys.com/docs/qualys-was-api-user-guide.pdf to find your API
enabled = true
hostname = qualys_vuln
username = exampleuser
password = examplepass
write_path=/opt/VulnWhisperer/data/qualys_vuln/
db_path=/opt/VulnWhisperer/data/database
verbose=true
[detectify]
#Reference https://developer.detectify.com/
enabled = false
hostname = detectify
#username variable used as apiKey
username = exampleuser
#password variable used as secretKey
password = examplepass
write_path =/opt/VulnWhisperer/data/detectify/
db_path = /opt/VulnWhisperer/data/database
verbose = true
[openvas]
enabled = false
hostname = openvas
port = 4000
username = exampleuser
password = examplepass
write_path=/opt/VulnWhisperer/data/openvas/
db_path=/opt/VulnWhisperer/data/database
verbose=true
[jira]
enabled = false
hostname = jira-host
username = username
password = password
write_path = /opt/VulnWhisperer/data/jira/
db_path = /opt/VulnWhisperer/data/database
verbose = true
dns_resolv = False
#Sample jira report scan, will automatically be created for existent scans
#[jira.qualys_vuln.test_scan]
#source = qualys_vuln
#scan_name = Test Scan
#jira_project = PROJECT
; if multiple components, separate by "," = None
#components =
; minimum criticality to report (low, medium, high or critical) = None
#min_critical_to_report = high
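This config is the one exercised by the test scripts; a typical mocked run against it (as used in tests/test-vuln_whisperer.sh below) is:
```shell
vuln_whisperer -F -c configs/test.ini --mock --mock_dir tests/data
```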

1
deps/qualysapi vendored

Submodule deps/qualysapi deleted from 42c3b43ac1

97
docker-compose-test.yml Normal file

@ -0,0 +1,97 @@
version: '2'
services:
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:6.6.0
container_name: elasticsearch
environment:
- cluster.name=vulnwhisperer
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms1g -Xmx1g"
- xpack.security.enabled=false
- cluster.routing.allocation.disk.threshold_enabled=false
ulimits:
memlock:
soft: -1
hard: -1
nofile:
soft: 65536
hard: 65536
mem_limit: 8g
volumes:
- ./data/esdata1:/usr/share/elasticsearch/data
- ./data/es_snapshots:/snapshots
ports:
- 9200:9200
#restart: always
networks:
esnet:
aliases:
- elasticsearch.local
kibana:
image: docker.elastic.co/kibana/kibana:6.6.0
container_name: kibana
environment:
SERVER_NAME: kibana
ELASTICSEARCH_URL: http://elasticsearch:9200
ports:
- 5601:5601
depends_on:
- elasticsearch
networks:
esnet:
aliases:
- kibana.local
kibana-config:
image: alpine
container_name: kibana-config
volumes:
- ./resources/elk6/init_kibana.sh:/opt/init_kibana.sh
- ./resources/elk6/kibana_APIonly.json:/opt/kibana_APIonly.json
- ./resources/elk6/logstash-vulnwhisperer-template.json:/opt/index-template.json
command: sh -c "apk add --no-cache curl bash && chmod +x /opt/init_kibana.sh && chmod +r /opt/kibana_APIonly.json && cd /opt/ && /bin/bash /opt/init_kibana.sh" # /opt/kibana_APIonly.json"
networks:
esnet:
aliases:
- kibana-config.local
logstash:
image: docker.elastic.co/logstash/logstash:6.6.0
container_name: logstash
volumes:
- ./resources/elk6/pipeline/:/usr/share/logstash/pipeline
- ./data/vulnwhisperer/:/opt/VulnWhisperer/data
# - ./resources/elk6/logstash.yml:/usr/share/logstash/config/logstash.yml
environment:
- xpack.monitoring.enabled=false
depends_on:
- elasticsearch
ports:
- 9600:9600
networks:
esnet:
aliases:
- logstash.local
vulnwhisperer:
# image: hasecuritysolutions/vulnwhisperer:latest
image: vulnwhisperer-local
container_name: vulnwhisperer
entrypoint: [
"vuln_whisperer",
"-F",
"-c",
"/opt/VulnWhisperer/vulnwhisperer.ini",
"--mock",
"--mock_dir",
"/tests/data"
]
volumes:
- ./data/vulnwhisperer/:/opt/VulnWhisperer/data
# - ./resources/elk6/vulnwhisperer.ini:/opt/VulnWhisperer/vulnwhisperer.ini
- ./configs/test.ini:/opt/VulnWhisperer/vulnwhisperer.ini
- ./tests/data/:/tests/data
network_mode: host
networks:
esnet:


@ -6,9 +6,8 @@ services:
environment:
- cluster.name=vulnwhisperer
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
- "ES_JAVA_OPTS=-Xms1g -Xmx1g"
- xpack.security.enabled=false
ulimits:
memlock:
soft: -1
@ -21,7 +20,7 @@ services:
- esdata1:/usr/share/elasticsearch/data
ports:
- 9200:9200
restart: always
#restart: always
networks:
esnet:
aliases:
@ -40,13 +39,25 @@ services:
esnet:
aliases:
- kibana.local
kibana-config:
image: alpine
container_name: kibana-config
volumes:
- ./resources/elk6/init_kibana.sh:/opt/init_kibana.sh
- ./resources/elk6/kibana_APIonly.json:/opt/kibana_APIonly.json
- ./resources/elk6/logstash-vulnwhisperer-template.json:/opt/index-template.json
command: sh -c "apk add --no-cache curl bash && chmod +x /opt/init_kibana.sh && chmod +r /opt/kibana_APIonly.json && cd /opt/ && /bin/bash /opt/init_kibana.sh" # /opt/kibana_APIonly.json"
networks:
esnet:
aliases:
- kibana-config.local
logstash:
image: docker.elastic.co/logstash/logstash:6.6.0
container_name: logstash
volumes:
- ./elk6/pipeline/:/usr/share/logstash/pipeline
#- ./elk6/logstash.yml:/usr/share/logstash/config/logstash.yml
- ./data/:/opt/vulnwhisperer/data
- ./resources/elk6/pipeline/:/usr/share/logstash/pipeline
- ./data/:/opt/VulnWhisperer/data
#- ./resources/elk6/logstash.yml:/usr/share/logstash/config/logstash.yml
environment:
- xpack.monitoring.enabled=false
depends_on:
@ -61,11 +72,11 @@ services:
entrypoint: [
"vuln_whisperer",
"-c",
"/opt/vulnwhisperer/vulnwhisperer.ini"
"/opt/VulnWhisperer/vulnwhisperer.ini"
]
volumes:
- ./data/:/opt/vulnwhisperer/data
- ./elk6/vulnwhisperer.ini:/opt/vulnwhisperer/vulnwhisperer.ini
- ./data/:/opt/VulnWhisperer/data
- ./resources/elk6/vulnwhisperer.ini:/opt/VulnWhisperer/vulnwhisperer.ini
network_mode: host
volumes:
esdata1:

Binary image file not shown (added; 449 KiB).

File diff suppressed because one or more lines are too long


@ -8,3 +8,5 @@ bs4
jira
bottle
coloredlogs
qualysapi>=5.1.0
httpretty


@ -21,7 +21,7 @@ services:
- 9200:9200
environment:
- xpack.security.enabled=false
restart: always
#restart: always
networks:
esnet:
aliases:


@ -53,7 +53,7 @@
],
"properties": {
"plugin_id": {
"type": "integer"
"type": "float"
},
"last_updated": {
"type": "date"


@ -7,13 +7,13 @@
input {
file {
path => "/opt/vulnwhisperer/nessus/**/*"
path => "/opt/VulnWhisperer/nessus/**/*"
start_position => "beginning"
tags => "nessus"
type => "nessus"
}
file {
path => "/opt/vulnwhisperer/tenable/*.csv"
path => "/opt/VulnWhisperer/tenable/*.csv"
start_position => "beginning"
tags => "tenable"
type => "tenable"
@ -27,7 +27,7 @@ filter {
csv {
# columns => ["plugin_id", "cve", "cvss", "risk", "asset", "protocol", "port", "plugin_name", "synopsis", "description", "solution", "see_also", "plugin_output"]
columns => ["plugin_id", "cve", "cvss", "risk", "asset", "protocol", "port", "plugin_name", "synopsis", "description", "solution", "see_also", "plugin_output", "asset_uuid", "vulnerability_state", "ip", "fqdn", "netbios", "operating_system", "mac_address", "plugin_family", "cvss_base", "cvss_temporal", "cvss_temporal_vector", "cvss_vector", "cvss3_base", "cvss3_temporal", "cvss3_temporal_vector", "cvss_vector", "system_type", "host_start", "host_end"]
columns => ["plugin_id", "cve", "cvss", "risk", "asset", "protocol", "port", "plugin_name", "synopsis", "description", "solution", "see_also", "plugin_output", "asset_uuid", "vulnerability_state", "ip", "fqdn", "netbios", "operating_system", "mac_address", "plugin_family", "cvss_base", "cvss_temporal", "cvss_temporal_vector", "cvss_vector", "cvss3_base", "cvss3_temporal", "cvss3_temporal_vector", "cvss3_vector", "system_type", "host_start", "host_end"]
separator => ","
source => "message"
}


@ -6,7 +6,7 @@
input {
file {
path => "/opt/vulnwhisperer/qualys/*.json"
path => [ "/opt/VulnWhisperer/data/qualys/*.json" , "/opt/VulnWhisperer/data/qualys_web/*.json", "/opt/VulnWhisperer/data/qualys_vuln/*.json" ]
type => json
codec => json
start_position => "beginning"


@ -6,7 +6,7 @@
input {
file {
path => "/opt/vulnwhisperer/openvas/*.json"
path => "/opt/VulnWhisperer/openvas/*.json"
type => json
codec => json
start_position => "beginning"


@ -2,7 +2,7 @@
input {
file {
path => "/opt/vulnwhisperer/jira/*.json"
path => "/opt/VulnWhisperer/jira/*.json"
type => json
codec => json
start_position => "beginning"

52
resources/elk6/init_kibana.sh Executable file

@ -0,0 +1,52 @@
#!/bin/bash
#kibana_url="localhost:5601"
kibana_url="kibana.local:5601"
elasticsearch_url="elasticsearch.local:9200"
add_saved_objects="curl -s -u elastic:changeme -k -XPOST 'http://"$kibana_url"/api/saved_objects/_bulk_create' -H 'Content-Type: application/json' -H \"kbn-xsrf: true\" -d @"
#Create all saved objects - including index pattern
saved_objects_file="kibana_APIonly.json"
#if [ `curl -I localhost:5601/status | head -n1 |cut -d$' ' -f2` -eq '200' ]; then echo "Loading VulnWhisperer Saved Objects"; eval $(echo $add_saved_objects$saved_objects_file); else echo "waiting for kibana"; fi
until curl -s "$elasticsearch_url/_cluster/health?pretty" | grep '"status"' | grep -qE "green|yellow"; do
curl -s "$elasticsearch_url/_cluster/health?pretty"
echo "Waiting for Elasticsearch..."
sleep 5
done
count=0
until curl -s --fail -XPUT "http://$elasticsearch_url/_template/vulnwhisperer" -H 'Content-Type: application/json' -d '@/opt/index-template.json'; do
echo "Loading VulnWhisperer index template..."
((count++)) && ((count==60)) && break
sleep 1
done
if [[ count -le 60 && $(curl -s -I http://$elasticsearch_url/_template/vulnwhisperer | head -n1 |cut -d$' ' -f2) == "200" ]]; then
echo -e "\n✅ VulnWhisperer index template loaded"
else
echo -e "\n❌ VulnWhisperer index template failed to load"
fi
until [ "`curl -s -I "$kibana_url"/status | head -n1 |cut -d$' ' -f2`" == "200" ]; do
curl -s -I "$kibana_url"/status
echo "Waiting for Kibana..."
sleep 5
done
echo "Loading VulnWhisperer Saved Objects"
echo $add_saved_objects$saved_objects_file
eval $(echo $add_saved_objects$saved_objects_file)
#set "*" as default index
#id_default_index="87f3bcc0-8b37-11e8-83be-afaed4786d8c"
#os.system("curl -X POST -H \"Content-Type: application/json\" -H \"kbn-xsrf: true\" -d '{\"value\":\""+id_default_index+"\"}' http://elastic:changeme@"+kibana_url+"kibana/settings/defaultIndex")
#Create vulnwhisperer index pattern
#index_name = "logstash-vulnwhisperer-*"
#os.system(add_index+index_name+"' '-d{\"attributes\":{\"title\":\""+index_name+"\",\"timeFieldName\":\"@timestamp\"}}'")
#Create jira index pattern, separated for not fill of crap variables the Discover tab by default
#index_name = "logstash-jira-*"
#os.system(add_index+index_name+"' '-d{\"attributes\":{\"title\":\""+index_name+"\",\"timeFieldName\":\"@timestamp\"}}'")

433
resources/elk6/kibana.json Normal file

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long


@ -0,0 +1,233 @@
{
"index_patterns": "logstash-vulnwhisperer-*",
"mappings": {
"doc": {
"properties": {
"@timestamp": {
"type": "date"
},
"@version": {
"type": "keyword"
},
"asset": {
"type": "text",
"norms": false,
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"asset_uuid": {
"type": "keyword"
},
"assign_ip": {
"type": "ip"
},
"category": {
"type": "keyword"
},
"cve": {
"type": "keyword"
},
"cvss_base": {
"type": "float"
},
"cvss_temporal_vector": {
"type": "keyword"
},
"cvss_temporal": {
"type": "float"
},
"cvss_vector": {
"type": "keyword"
},
"cvss": {
"type": "float"
},
"cvss3_base": {
"type": "float"
},
"cvss3_temporal_vector": {
"type": "keyword"
},
"cvss3_temporal": {
"type": "float"
},
"cvss3_vector": {
"type": "keyword"
},
"cvss3": {
"type": "float"
},
"description": {
"fields": {
"keyword": {
"ignore_above": 256,
"type": "keyword"
}
},
"norms": false,
"type": "text"
},
"dns": {
"type": "keyword"
},
"exploitability": {
"fields": {
"keyword": {
"ignore_above": 256,
"type": "keyword"
}
},
"norms": false,
"type": "text"
},
"fqdn": {
"type": "keyword"
},
"geoip": {
"dynamic": true,
"type": "object",
"properties": {
"ip": {
"type": "ip"
},
"latitude": {
"type": "float"
},
"location": {
"type": "geo_point"
},
"longitude": {
"type": "float"
}
}
},
"history_id": {
"type": "keyword"
},
"host": {
"type": "keyword"
},
"host_end": {
"type": "date"
},
"host_start": {
"type": "date"
},
"impact": {
"fields": {
"keyword": {
"ignore_above": 256,
"type": "keyword"
}
},
"norms": false,
"type": "text"
},
"ip_status": {
"type": "keyword"
},
"ip": {
"type": "ip"
},
"last_updated": {
"type": "date"
},
"operating_system": {
"type": "keyword"
},
"path": {
"type": "keyword"
},
"pci_vuln": {
"type": "keyword"
},
"plugin_family": {
"type": "keyword"
},
"plugin_id": {
"type": "keyword"
},
"plugin_name": {
"type": "keyword"
},
"plugin_output": {
"fields": {
"keyword": {
"ignore_above": 256,
"type": "keyword"
}
},
"norms": false,
"type": "text"
},
"port": {
"type": "integer"
},
"protocol": {
"type": "keyword"
},
"results": {
"type": "text"
},
"risk_number": {
"type": "integer"
},
"risk_score_name": {
"type": "keyword"
},
"risk_score": {
"type": "float"
},
"risk": {
"type": "keyword"
},
"scan_id": {
"type": "keyword"
},
"scan_name": {
"type": "keyword"
},
"scan_reference": {
"type": "keyword"
},
"see_also": {
"type": "keyword"
},
"solution": {
"type": "keyword"
},
"source": {
"type": "keyword"
},
"ssl": {
"type": "keyword"
},
"synopsis": {
"type": "keyword"
},
"system_type": {
"type": "keyword"
},
"tags": {
"type": "keyword"
},
"threat": {
"type": "text"
},
"type": {
"type": "keyword"
},
"vendor_reference": {
"type": "keyword"
},
"vulnerability_state": {
"type": "keyword"
}
}
}
}
}
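The kibana-config container loads this template through a curl PUT (see init_kibana.sh above); done by hand from the repo root, it would look roughly like this (URL and file path assumed from this release's layout):
```shell
curl -XPUT "http://localhost:9200/_template/vulnwhisperer" \
  -H 'Content-Type: application/json' \
  -d @resources/elk6/logstash-vulnwhisperer-template.json
```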


@ -7,14 +7,14 @@
input {
file {
path => "/opt/vulnwhisperer/data/nessus/**/*"
path => "/opt/VulnWhisperer/data/nessus/**/*"
mode => "read"
start_position => "beginning"
file_completed_action => "delete"
tags => "nessus"
}
file {
path => "/opt/vulnwhisperer/data/tenable/*.csv"
path => "/opt/VulnWhisperer/data/tenable/*.csv"
mode => "read"
start_position => "beginning"
file_completed_action => "delete"
@ -29,7 +29,7 @@ filter {
csv {
# columns => ["plugin_id", "cve", "cvss", "risk", "asset", "protocol", "port", "plugin_name", "synopsis", "description", "solution", "see_also", "plugin_output"]
columns => ["plugin_id", "cve", "cvss", "risk", "asset", "protocol", "port", "plugin_name", "synopsis", "description", "solution", "see_also", "plugin_output", "asset_uuid", "vulnerability_state", "ip", "fqdn", "netbios", "operating_system", "mac_address", "plugin_family", "cvss_base", "cvss_temporal", "cvss_temporal_vector", "cvss_vector", "cvss3_base", "cvss3_temporal", "cvss3_temporal_vector", "cvss_vector", "system_type", "host_start", "host_end"]
columns => ["plugin_id", "cve", "cvss", "risk", "asset", "protocol", "port", "plugin_name", "synopsis", "description", "solution", "see_also", "plugin_output", "asset_uuid", "vulnerability_state", "ip", "fqdn", "netbios", "operating_system", "mac_address", "plugin_family", "cvss_base", "cvss_temporal", "cvss_temporal_vector", "cvss_vector", "cvss3_base", "cvss3_temporal", "cvss3_temporal_vector", "cvss3_vector", "system_type", "host_start", "host_end"]
separator => ","
source => "message"
}
@ -53,11 +53,13 @@ filter {
}
#If using filebeats as your source, you will need to replace the "path" field to "source"
# Remove when scan name is included in event (current method is error prone)
grok {
match => { "path" => "(?<scan_name>[a-zA-Z0-9_.\-]+)_%{INT:scan_id}_%{INT:history_id}_%{INT:last_updated}.csv$" }
tag_on_failure => []
}
# TODO remove when @timestamp is included in event
date {
match => [ "last_updated", "UNIX" ]
target => "@timestamp"
@ -169,6 +171,9 @@ filter {
output {
if "nessus" in [tags] or "tenable" in [tags]{
stdout {
codec => dots
}
elasticsearch {
hosts => [ "elasticsearch:9200" ]
index => "logstash-vulnwhisperer-%{+YYYY.MM}"


@ -6,7 +6,7 @@
input {
file {
path => "/opt/vulnwhisperer/data/qualys/*.json"
path => [ "/opt/VulnWhisperer/data/qualys/*.json" , "/opt/VulnWhisperer/data/qualys_web/*.json", "/opt/VulnWhisperer/data/qualys_vuln/*.json"]
type => json
codec => json
start_position => "beginning"
@ -14,7 +14,6 @@ input {
mode => "read"
start_position => "beginning"
file_completed_action => "delete"
}
}
@ -99,6 +98,8 @@ filter {
target => "last_time_tested"
}
}
# TODO remove when @timestamp is included in event
date {
match => [ "last_updated", "UNIX" ]
target => "@timestamp"
@ -148,6 +149,9 @@ filter {
}
output {
if "qualys" in [tags] {
stdout {
codec => dots
}
elasticsearch {
hosts => [ "elasticsearch:9200" ]
index => "logstash-vulnwhisperer-%{+YYYY.MM}"


@ -6,7 +6,7 @@
input {
file {
path => "/opt/vulnwhisperer/data/openvas/*.json"
path => "/opt/VulnWhisperer/data/openvas/*.json"
type => json
codec => json
start_position => "beginning"
@ -92,6 +92,8 @@ filter {
target => "last_time_tested"
}
}
# TODO remove when @timestamp is included in event
date {
match => [ "last_updated", "UNIX" ]
target => "@timestamp"
@ -141,6 +143,9 @@ filter {
}
output {
if "openvas" in [tags] {
stdout {
codec => dots
}
elasticsearch {
hosts => [ "elasticsearch:9200" ]
index => "logstash-vulnwhisperer-%{+YYYY.MM}"


@ -2,7 +2,7 @@
input {
file {
path => "/opt/vulnwhisperer/data/jira/*.json"
path => "/opt/VulnWhisperer/data/jira/*.json"
type => json
codec => json
start_position => "beginning"


@ -4,8 +4,8 @@ hostname=localhost
port=8834
username=nessus_username
password=nessus_password
write_path=/opt/vulnwhisperer/data/nessus/
db_path=/opt/vulnwhisperer/database
write_path=/opt/VulnWhisperer/data/nessus/
db_path=/opt/VulnWhisperer/database
trash=false
verbose=true
@ -15,7 +15,7 @@ hostname=cloud.tenable.com
port=443
username=tenable.io_username
password=tenable.io_password
write_path=/opt/vulnwhisperer/data/tenable/
write_path=/opt/VulnWhisperer/data/tenable/
db_path=/opt/VulnWhisperer/data/database
trash=false
verbose=true
@ -26,8 +26,8 @@ enabled = true
hostname = qualysapi.qg2.apps.qualys.com
username = exampleuser
password = examplepass
write_path=/opt/vulnwhisperer/data/qualys/
db_path=/opt/vulnwhisperer/data/database
write_path=/opt/VulnWhisperer/data/qualys/
db_path=/opt/VulnWhisperer/data/database
verbose=true
# Set the maximum number of retries each connection should attempt.
@ -42,8 +42,8 @@ enabled = true
hostname = qualysapi.qg2.apps.qualys.com
username = exampleuser
password = examplepass
write_path=/opt/vulnwhisperer/data/qualys/
db_path=/opt/vulnwhisperer/data/database
write_path=/opt/VulnWhisperer/data/qualys/
db_path=/opt/VulnWhisperer/data/database
verbose=true
# Set the maximum number of retries each connection should attempt.
@ -60,8 +60,8 @@ hostname = api.detectify.com
username = exampleuser
#password variable used as secretKey
password = examplepass
write_path =/opt/vulnwhisperer/data/detectify/
db_path = /opt/vulnwhisperer/data/database
write_path =/opt/VulnWhisperer/data/detectify/
db_path = /opt/VulnWhisperer/data/database
verbose = true
[openvas]
@ -70,8 +70,8 @@ hostname = localhost
port = 4000
username = exampleuser
password = examplepass
write_path=/opt/vulnwhisperer/data/openvas/
db_path=/opt/vulnwhisperer/data/database
write_path=/opt/VulnWhisperer/data/openvas/
db_path=/opt/VulnWhisperer/data/database
verbose=true
#[proxy]
@ -92,9 +92,10 @@ verbose=true
hostname = jira-host
username = username
password = password
write_path = /opt/vulnwhisperer/data/jira/
db_path = /opt/vulnwhisperer/data/database
write_path = /opt/VulnWhisperer/data/jira/
db_path = /opt/VulnWhisperer/data/database
verbose = true
dns_resolv = False
#Sample jira report scan, will automatically be created for existent scans
#[jira.qualys_vuln.test_scan]


@ -4,7 +4,7 @@ from setuptools import setup, find_packages
setup(
name='VulnWhisperer',
version='1.7.1',
version='1.8',
packages=find_packages(),
url='https://github.com/austin-taylor/vulnwhisperer',
license="""MIT License

1
tests/data Submodule

Submodule tests/data added at 55dc6832f8

109
tests/test-docker.sh Executable file

@ -0,0 +1,109 @@
#!/usr/bin/env bash
NORMAL=$(tput sgr0)
GREEN=$(tput setaf 2)
YELLOW=$(tput setaf 3)
RED=$(tput setaf 1)
function red() {
echo -e "$RED$*$NORMAL"
}
function green() {
echo -e "$GREEN$*$NORMAL"
}
function yellow() {
echo -e "$YELLOW$*$NORMAL"
}
return_code=0
elasticsearch_url="localhost:9200"
logstash_url="localhost:9600"
until curl -s "$elasticsearch_url/_cluster/health?pretty" | grep '"status"' | grep -qE "green|yellow"; do
yellow "Waiting for Elasticsearch..."
sleep 5
done
green "✅ Elasticsearch status is green..."
count=0
until [[ $(curl -s "$logstash_url/_node/stats" | jq '.events.out') -ge 1236 ]]; do
yellow "Waiting for Logstash load to finish... $(curl -s "$logstash_url/_node/stats" | jq '.events.out') of 1236 (attempt $count of 60)"
((count++)) && ((count==60)) && break
sleep 5
done
if [[ count -le 60 && $(curl -s "$logstash_url/_node/stats" | jq '.events.out') -ge 1236 ]]; then
green "✅ Logstash load finished..."
else
red "❌ Logstash load didn't complete... $(curl -s "$logstash_url/_node/stats" | jq '.events.out')"
fi
count=0
until [[ $(curl -s "$elasticsearch_url/logstash-vulnwhisperer-2019.03/_count" | jq '.count') -ge 1232 ]] ; do
yellow "Waiting for Elasticsearch index to sync... $(curl -s "$elasticsearch_url/logstash-vulnwhisperer-2019.03/_count" | jq '.count') of 1232 logs loaded (attempt $count of 150)"
((count++)) && ((count==150)) && break
sleep 2
done
if [[ count -le 150 && $(curl -s "$elasticsearch_url/logstash-vulnwhisperer-2019.03/_count" | jq '.count') -ge 1232 ]]; then
green "✅ logstash-vulnwhisperer-2019.03 document count >= 1232"
else
red "❌ TIMED OUT waiting for logstash-vulnwhisperer-2019.03 document count: $(curl -s "$elasticsearch_url/logstash-vulnwhisperer-2019.03/_count" | jq) != 1232"
fi
# if [[ $(curl -s "$elasticsearch_url/logstash-vulnwhisperer-2019.03/_count" | jq '.count') == 1232 ]]; then
# green "✅ Passed: logstash-vulnwhisperer-2019.03 document count == 1232"
# else
# red "❌ Failed: logstash-vulnwhisperer-2019.03 document count == 1232 was: $(curl -s "$elasticsearch_url/logstash-vulnwhisperer-2019.03/_count") instead"
# ((return_code = return_code + 1))
# fi
# Test Nessus plugin_name:Backported Security Patch Detection (FTP)
nessus_doc=$(curl -s "$elasticsearch_url/logstash-vulnwhisperer-2019.03/_search?q=plugin_name:%22Backported%20Security%20Patch%20Detection%20(FTP)%22%20AND%20asset:176.28.50.164%20AND%20tags:nessus" | jq '.hits.hits[]._source')
if echo $nessus_doc | jq '.risk' | grep -q "None"; then
green "✅ Passed: Nessus risk == None"
else
red "❌ Failed: Nessus risk == None was: $(echo $nessus_doc | jq '.risk') instead"
((return_code = return_code + 1))
fi
# Test Tenable plugin_name:Backported Security Patch Detection (FTP)
tenable_doc=$(curl -s "$elasticsearch_url/logstash-vulnwhisperer-2019.03/_search?q=plugin_name:%22Backported%20Security%20Patch%20Detection%20(FTP)%22%20AND%20asset:176.28.50.164%20AND%20tags:tenable" | jq '.hits.hits[]._source')
# Test asset
if echo $tenable_doc | jq .asset | grep -q '176.28.50.164'; then
green "✅ Passed: Tenable asset == 176.28.50.164"
else
red "❌ Failed: Tenable asset == 176.28.50.164 was: $(echo $tenable_doc | jq .asset) instead"
((return_code = return_code + 1))
fi
# Test @timestamp
if echo $tenable_doc | jq '.["@timestamp"]' | grep -q '2019-03-30T15:45:44.000Z'; then
green "✅ Passed: Tenable @timestamp == 2019-03-30T15:45:44.000Z"
else
red "❌ Failed: Tenable @timestamp == 2019-03-30T15:45:44.000Z was: $(echo $tenable_doc | jq '.["@timestamp"]') instead"
((return_code = return_code + 1))
fi
# Test Qualys plugin_name:OpenSSL Multiple Remote Security Vulnerabilities
qualys_vuln_doc=$(curl -s "$elasticsearch_url/logstash-vulnwhisperer-2019.03/_search?q=tags:qualys_vuln%20AND%20ip:%22176.28.50.164%22%20AND%20plugin_name:%22OpenSSL%20Multiple%20Remote%20Security%20Vulnerabilities%22%20AND%20port:465" | jq '.hits.hits[]._source')
# Test @timestamp
if echo $qualys_vuln_doc | jq '.["@timestamp"]' | grep -q '2019-03-30T10:17:41.000Z'; then
green "✅ Passed: Qualys VM @timestamp == 2019-03-30T10:17:41.000Z"
else
red "❌ Failed: Qualys VM @timestamp == 2019-03-30T10:17:41.000Z was: $(echo $qualys_vuln_doc | jq '.["@timestamp"]') instead"
((return_code = return_code + 1))
fi
# Test cvss
if echo $qualys_vuln_doc | jq '.cvss' | grep -q '6.8'; then
green "✅ Passed: Qualys VM cvss == 6.8"
else
red "❌ Failed: Qualys VM cvss == 6.8 was: $(echo $qualys_vuln_doc | jq '.cvss') instead"
((return_code = return_code + 1))
fi
exit $return_code

97
tests/test-vuln_whisperer.sh Executable file

@ -0,0 +1,97 @@
#!/usr/bin/env bash
NORMAL=$(tput sgr0)
GREEN=$(tput setaf 2)
YELLOW=$(tput setaf 3)
RED=$(tput setaf 1)
function red() {
echo -e "$RED$*$NORMAL"
}
function green() {
echo -e "$GREEN$*$NORMAL"
}
function yellow() {
echo -e "$YELLOW$*$NORMAL"
}
return_code=0
TEST_PATH=${TEST_PATH:-"tests/data"}
yellow "\n*********************************************"
yellow "* Test successful scan download and parsing *"
yellow "*********************************************"
rm -rf /opt/VulnWhisperer/*
if vuln_whisperer -F -c configs/test.ini --mock --mock_dir "${TEST_PATH}"; then
green "\n✅ Passed: Test successful scan download and parsing"
else
red "\n❌ Failed: Test successful scan download and parsing"
((return_code = return_code + 1))
fi
yellow "\n*********************************************"
yellow "* Test run with no scans to import *"
yellow "*********************************************"
if vuln_whisperer -F -c configs/test.ini --mock --mock_dir "${TEST_PATH}"; then
green "\n✅ Passed: Test run with no scans to import"
else
red "\n❌ Failed: Test run with no scans to import"
((return_code = return_code + 1))
fi
yellow "\n*********************************************"
yellow "* Test one failed scan *"
yellow "*********************************************"
rm -rf /opt/VulnWhisperer/*
yellow "Removing ${TEST_PATH}/nessus/GET_scans_exports_164_download"
mv "${TEST_PATH}/nessus/GET_scans_exports_164_download"{,.bak}
if vuln_whisperer -F -c configs/test.ini --mock --mock_dir "${TEST_PATH}"; [[ $? -eq 1 ]]; then
green "\n✅ Passed: Test one failed scan"
else
red "\n❌ Failed: Test one failed scan"
((return_code = return_code + 1))
fi
yellow "\n*********************************************"
yellow "* Test two failed scans *"
yellow "*********************************************"
rm -rf /opt/VulnWhisperer/*
yellow "Removing ${TEST_PATH}/qualys_vuln/scan_1553941061.87241"
mv "${TEST_PATH}/qualys_vuln/scan_1553941061.87241"{,.bak}
if vuln_whisperer -F -c configs/test.ini --mock --mock_dir "${TEST_PATH}"; [[ $? -eq 2 ]]; then
green "\n✅ Passed: Test two failed scans"
else
red "\n❌ Failed: Test two failed scans"
((return_code = return_code + 1))
fi
yellow "\n*********************************************"
yellow "* Test only nessus with one failed scan *"
yellow "*********************************************"
rm -rf /opt/VulnWhisperer/*
if vuln_whisperer -F -c configs/test.ini -s nessus --mock --mock_dir "${TEST_PATH}"; [[ $? -eq 1 ]]; then
green "\n✅ Passed: Test only nessus with one failed scan"
else
red "\n❌ Failed: Test only nessus with one failed scan"
((return_code = return_code + 1))
fi
yellow "*********************************************"
yellow "* Test only Qualys VM with one failed scan *"
yellow "*********************************************"
rm -rf /opt/VulnWhisperer/*
if vuln_whisperer -F -c configs/test.ini -s qualys_vuln --mock --mock_dir "${TEST_PATH}"; [[ $? -eq 1 ]]; then
green "\n✅ Passed: Test only Qualys VM with one failed scan"
else
red "\n❌ Failed: Test only Qualys VM with one failed scan"
((return_code = return_code + 1))
fi
# Restore the removed files
mv "${TEST_PATH}/qualys_vuln/scan_1553941061.87241.bak" "${TEST_PATH}/qualys_vuln/scan_1553941061.87241"
mv "${TEST_PATH}/nessus/GET_scans_exports_164_download.bak" "${TEST_PATH}/nessus/GET_scans_exports_164_download"
exit $return_code


@ -1,9 +1,8 @@
import os
import sys
import logging
# Support for python3
if (sys.version_info > (3, 0)):
if sys.version_info > (3, 0):
import configparser as cp
else:
import ConfigParser as cp
@ -26,16 +25,16 @@ class vwConfig(object):
return self.config.getboolean(section, option)
def get_sections_with_attribute(self, attribute):
sections = []
# TODO: does this not also need the "yes" case?
check = ["true", "True", "1"]
for section in self.config.sections():
try:
if self.get(section, attribute) in check:
sections.append(section)
except:
self.logger.warn("Section {} has no option '{}'".format(section, attribute))
return sections
def exists_jira_profiles(self, profiles):
# get list of profiles source_scanner.scan_name
@ -45,7 +44,6 @@ class vwConfig(object):
return False
return True
def update_jira_profiles(self, profiles):
# create JIRA profiles in the ini config file
self.logger.debug('Updating Jira profiles: {}'.format(str(profiles)))
@ -59,27 +57,27 @@ class vwConfig(object):
except:
self.logger.warn("Creating config section for '{}'".format(section_name))
self.config.add_section(section_name)
self.config.set(section_name,'source',profile.split('.')[0])
self.config.set(section_name, 'source', profile.split('.')[0])
# in case any scan name contains '.' character
self.config.set(section_name,'scan_name','.'.join(profile.split('.')[1:]))
self.config.set(section_name,'jira_project', '')
self.config.set(section_name,'; if multiple components, separate by ","')
self.config.set(section_name,'components', '')
self.config.set(section_name,'; minimum criticality to report (low, medium, high or critical)')
self.config.set(section_name,'min_critical_to_report', 'high')
self.config.set(section_name,'; automatically report, boolean value ')
self.config.set(section_name,'autoreport', 'false')
self.config.set(section_name, 'scan_name', '.'.join(profile.split('.')[1:]))
self.config.set(section_name, 'jira_project', '')
self.config.set(section_name, '; if multiple components, separate by ","')
self.config.set(section_name, 'components', '')
self.config.set(section_name, '; minimum criticality to report (low, medium, high or critical)')
self.config.set(section_name, 'min_critical_to_report', 'high')
self.config.set(section_name, '; automatically report, boolean value ')
self.config.set(section_name, 'autoreport', 'false')
# TODO: try/catch this
# writing changes back to file
with open(self.config_in, 'w') as configfile:
self.config.write(configfile)
self.logger.debug('Written configuration to {}'.format(self.config_in))
# FIXME: this is the same as return None, that is the default return for return-less functions
return
def normalize_section(self, profile):
profile = "jira.{}".format(profile.lower().replace(" ","_"))
profile = "jira.{}".format(profile.lower().replace(" ", "_"))
self.logger.debug('Normalized profile as: {}'.format(profile))
return profile
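For reference, a minimal sketch of reading back one of the jira.* profile sections that update_jira_profiles writes; the file name and profile name are illustrative:

import ConfigParser as cp  # 'configparser' on Python 3

config = cp.RawConfigParser()
config.read('frameworks_example.ini')
section = 'jira.nessus.weekly_scan'  # hypothetical output of normalize_section
if config.getboolean(section, 'autoreport'):
    print(config.get(section, 'scan_name'), config.get(section, 'jira_project'))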

View File

@ -1,17 +1,14 @@
import requests
from requests.packages.urllib3.exceptions import InsecureRequestWarning
requests.packages.urllib3.disable_warnings(InsecureRequestWarning)
import pytz
from datetime import datetime
import json
import logging
import sys
import time
import logging
from datetime import datetime
import pytz
import requests
from requests.packages.urllib3.exceptions import InsecureRequestWarning
requests.packages.urllib3.disable_warnings(InsecureRequestWarning)
requests.packages.urllib3.disable_warnings(InsecureRequestWarning)
class NessusAPI(object):
@ -39,7 +36,10 @@ class NessusAPI(object):
self.base = 'https://{hostname}:{port}'.format(hostname=hostname, port=port)
self.verbose = verbose
self.headers = {
self.session = requests.Session()
self.session.verify = False
self.session.stream = True
self.session.headers = {
'Origin': self.base,
'Accept-Encoding': 'gzip, deflate, br',
'Accept-Language': 'en-US,en;q=0.8',
@ -53,30 +53,28 @@ class NessusAPI(object):
}
self.login()
self.scans = self.get_scans()
self.scan_ids = self.get_scan_ids()
def login(self):
resp = self.get_token()
if resp.status_code is 200:
self.headers['X-Cookie'] = 'token={token}'.format(token=resp.json()['token'])
auth = '{"username":"%s", "password":"%s"}' % (self.user, self.password)
resp = self.request(self.SESSION, data=auth, json_output=False)
if resp.status_code == 200:
self.session.headers['X-Cookie'] = 'token={token}'.format(token=resp.json()['token'])
else:
raise Exception('[FAIL] Could not login to Nessus')
def request(self, url, data=None, headers=None, method='POST', download=False, json=False):
if headers is None:
headers = self.headers
def request(self, url, data=None, headers=None, method='POST', download=False, json_output=False):
timeout = 0
success = False
method = method.lower()
url = self.base + url
self.logger.debug('Requesting to url {}'.format(url))
methods = {'GET': requests.get,
'POST': requests.post,
'DELETE': requests.delete}
while (timeout <= 10) and (not success):
data = methods[method](url, data=data, headers=self.headers, verify=False)
if data.status_code == 401:
response = getattr(self.session, method)(url, data=data)
if response.status_code == 401:
if url == self.base + self.SESSION:
break
try:
@ -88,78 +86,35 @@ class NessusAPI(object):
else:
success = True
if json:
data = data.json()
if json_output:
return response.json()
if download:
self.logger.debug('Returning data.content')
return data.content
return data
def get_token(self):
auth = '{"username":"%s", "password":"%s"}' % (self.user, self.password)
token = self.request(self.SESSION, data=auth, json=False)
return token
def logout(self):
self.logger.debug('Logging out')
self.request(self.SESSION, method='DELETE')
def get_folders(self):
folders = self.request(self.FOLDERS, method='GET', json=True)
return folders
response_data = ''
count = 0
for chunk in response.iter_content(chunk_size=8192):
count += 1
if chunk:
response_data += chunk
self.logger.debug('Processed {} chunks'.format(count))
return response_data
return response
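The reworked request method keeps a single requests.Session, streams downloads in 8192-byte chunks, and retries on 401 by logging in again. A stripped-down sketch of that retry shape (standalone; the names and the relogin callback are assumptions):

import requests

def request_with_retry(session, method, url, relogin, data=None, max_attempts=10):
    # retry until a non-401 response or the attempt budget is spent; re-login on 401
    response = None
    for _ in range(max_attempts):
        response = getattr(session, method.lower())(url, data=data)
        if response.status_code != 401:
            break
        relogin(session)  # refresh the session's X-Cookie token header
    return response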
def get_scans(self):
scans = self.request(self.SCANS, method='GET', json=True)
scans = self.request(self.SCANS, method='GET', json_output=True)
return scans
def get_scan_ids(self):
scans = self.get_scans()
scans = self.scans
scan_ids = [scan_id['id'] for scan_id in scans['scans']] if scans['scans'] else []
self.logger.debug('Found {} scan_ids'.format(len(scan_ids)))
return scan_ids
def count_scan(self, scans, folder_id):
count = 0
for scan in scans:
if scan['folder_id'] == folder_id: count = count + 1
return count
def print_scans(self, data):
for folder in data['folders']:
self.logger.info("\\{0} - ({1})\\".format(folder['name'], self.count_scan(data['scans'], folder['id'])))
for scan in data['scans']:
if scan['folder_id'] == folder['id']:
self.logger.info("\t\"{0}\" - sid:{1} - uuid: {2}".format(scan['name'].encode('utf-8'), scan['id'], scan['uuid']))
def get_scan_details(self, scan_id):
data = self.request(self.SCAN_ID.format(scan_id=scan_id), method='GET', json=True)
return data
def get_scan_history(self, scan_id):
data = self.request(self.SCAN_ID.format(scan_id=scan_id), method='GET', json=True)
data = self.request(self.SCAN_ID.format(scan_id=scan_id), method='GET', json_output=True)
return data['history']
def get_scan_hosts(self, scan_id):
data = self.request(self.SCAN_ID.format(scan_id=scan_id), method='GET', json=True)
return data['hosts']
def get_host_vulnerabilities(self, scan_id, host_id):
query = self.HOST_VULN.format(scan_id=scan_id, host_id=host_id)
data = self.request(query, method='GET', json=True)
return data
def get_plugin_info(self, scan_id, host_id, plugin_id):
query = self.PLUGINS.format(scan_id=scan_id, host_id=host_id, plugin_id=plugin_id)
data = self.request(query, method='GET', json=True)
return data
def export_scan(self, scan_id, history_id):
data = {'format': 'csv'}
query = self.EXPORT_REPORT.format(scan_id=scan_id, history_id=history_id)
req = self.request(query, data=data, method='POST')
return req
def download_scan(self, scan_id=None, history=None, export_format="", chapters="", dbpasswd="", profile=""):
def download_scan(self, scan_id=None, history=None, export_format="", profile=""):
running = True
counter = 0
@ -169,7 +124,7 @@ class NessusAPI(object):
else:
query = self.EXPORT_HISTORY.format(scan_id=scan_id, history_id=history)
scan_id = str(scan_id)
req = self.request(query, data=json.dumps(data), method='POST', json=True)
req = self.request(query, data=json.dumps(data), method='POST', json_output=True)
try:
file_id = req['file']
token_id = req['token'] if 'token' in req else req['temp_token']
@ -180,7 +135,7 @@ class NessusAPI(object):
time.sleep(2)
counter += 2
report_status = self.request(self.EXPORT_STATUS.format(scan_id=scan_id, file_id=file_id), method='GET',
json=True)
json_output=True)
running = report_status['status'] != 'ready'
sys.stdout.write(".")
sys.stdout.flush()
@ -188,23 +143,12 @@ class NessusAPI(object):
if counter % 60 == 0:
self.logger.info("Completed: {}".format(counter))
self.logger.info("Done: {}".format(counter))
if profile=='tenable':
if profile == 'tenable':
content = self.request(self.EXPORT_FILE_DOWNLOAD.format(scan_id=scan_id, file_id=file_id), method='GET', download=True)
else:
content = self.request(self.EXPORT_TOKEN_DOWNLOAD.format(token_id=token_id), method='GET', download=True)
return content
@staticmethod
def merge_dicts(self, *dict_args):
"""
Given any number of dicts, shallow copy and merge into a new dict,
precedence goes to key value pairs in latter dicts.
"""
result = {}
for dictionary in dict_args:
result.update(dictionary)
return result
def get_utc_from_local(self, date_time, local_tz=None, epoch=True):
date_time = datetime.fromtimestamp(date_time)
if local_tz is None:

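get_utc_from_local (truncated above) localizes a naive scanner timestamp and converts it to UTC; a self-contained sketch of that conversion, with the 'US/Central' default taken as an assumption:

from datetime import datetime
import pytz

def to_utc(epoch_ts, local_tz='US/Central'):  # default timezone is an assumption
    naive = datetime.fromtimestamp(epoch_ts)  # scanner-local wall-clock time
    return pytz.timezone(local_tz).localize(naive).astimezone(pytz.utc)

print(to_utc(1555000000))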
View File

@ -2,14 +2,13 @@
# -*- coding: utf-8 -*-
__author__ = 'Nathan Young'
import logging
import sys
import xml.etree.ElementTree as ET
import dateutil.parser as dp
import pandas as pd
import qualysapi
import requests
import sys
import logging
import os
import dateutil.parser as dp
class qualysWhisperAPI(object):
@ -25,12 +24,11 @@ class qualysWhisperAPI(object):
self.logger.info('Connected to Qualys at {}'.format(self.qgc.server))
except Exception as e:
self.logger.error('Could not connect to Qualys: {}'.format(str(e)))
# FIXME: exit(1) does not exist: either it's exit() or sys.exit(CODE)
exit(1)
sys.exit(1)
def scan_xml_parser(self, xml):
all_records = []
root = ET.XML(xml)
root = ET.XML(xml.encode("utf-8"))
for child in root.find('.//SCAN_LIST'):
all_records.append({
'name': child.find('TITLE').text,
@ -61,11 +59,12 @@ class qualysWhisperAPI(object):
'scan_ref': scan_id
}
scan_json = self.qgc.request(self.SCANS, parameters)
# First two columns are metadata we already have
# Last column corresponds to "target_distribution_across_scanner_appliances" element
# which doesn't follow the schema and breaks the pandas data manipulation
return pd.read_json(scan_json).iloc[2:-1]
# which doesn't follow the schema and breaks the pandas data manipulation
return pd.read_json(scan_json).iloc[2:-1]
class qualysUtils:
def __init__(self):
@ -78,15 +77,15 @@ class qualysUtils:
class qualysVulnScan:
def __init__(
self,
config=None,
file_in=None,
file_stream=False,
delimiter=',',
quotechar='"',
):
self,
config=None,
file_in=None,
file_stream=False,
delimiter=',',
quotechar='"',
):
self.logger = logging.getLogger('qualysVulnScan')
self.file_in = file_in
self.file_stream = file_stream
@ -111,7 +110,10 @@ class qualysVulnScan:
self.logger.info('Downloading scan ID: {}'.format(scan_id))
scan_report = self.qw.get_scan_details(scan_id=scan_id)
if not scan_report.empty:
keep_columns = ['category', 'cve_id', 'cvss3_base', 'cvss3_temporal', 'cvss_base', 'cvss_temporal', 'dns', 'exploitability', 'fqdn', 'impact', 'ip', 'ip_status', 'netbios', 'os', 'pci_vuln', 'port', 'protocol', 'qid', 'results', 'severity', 'solution', 'ssl', 'threat', 'title', 'type', 'vendor_reference']
keep_columns = ['category', 'cve_id', 'cvss3_base', 'cvss3_temporal', 'cvss_base',
'cvss_temporal', 'dns', 'exploitability', 'fqdn', 'impact', 'ip', 'ip_status',
'netbios', 'os', 'pci_vuln', 'port', 'protocol', 'qid', 'results', 'severity',
'solution', 'ssl', 'threat', 'title', 'type', 'vendor_reference']
scan_report = scan_report.filter(keep_columns)
scan_report['severity'] = scan_report['severity'].astype(int).astype(str)
scan_report['qid'] = scan_report['qid'].astype(int).astype(str)
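The int-then-str cast above normalizes floats like 3.0 into clean string values so downstream field types stay consistent; in isolation:

import pandas as pd

df = pd.DataFrame({'severity': [3.0, 5.0], 'qid': [38628.0, 11.0]})
df['severity'] = df['severity'].astype(int).astype(str)  # 3.0 -> '3'
df['qid'] = df['qid'].astype(int).astype(str)            # 38628.0 -> '38628'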

View File

@ -42,43 +42,48 @@ class qualysWhisperAPI(object):
except Exception as e:
self.logger.error('Could not connect to Qualys: {}'.format(str(e)))
self.headers = {
"content-type": "text/xml"}
self.config_parse = qcconf.QualysConnectConfig(config)
#"content-type": "text/xml"}
"Accept" : "application/json",
"Content-Type": "application/json"}
self.config_parse = qcconf.QualysConnectConfig(config, 'qualys_web')
try:
self.template_id = self.config_parse.get_template_id()
except:
self.logger.error('Could not retrieve template ID')
def request(self, path, method='get', data=None):
methods = {'get': requests.get,
'post': requests.post}
base = 'https://' + self.qgc.server + path
req = methods[method](base, auth=self.qgc.auth, data=data, headers=self.headers).content
return req
def get_version(self):
return self.request(self.VERSION)
def get_scan_count(self, scan_name):
parameters = (
E.ServiceRequest(
E.filters(
E.Criteria({'field': 'name', 'operator': 'CONTAINS'}, scan_name))))
xml_output = self.qgc.request(self.COUNT_WEBAPP, parameters)
root = objectify.fromstring(xml_output)
return root.count.text
####
#### GET SCANS TO PROCESS
####
def get_was_scan_count(self, status):
"""
Checks number of scans, used to control the api limits
"""
parameters = (
E.ServiceRequest(
E.filters(
E.Criteria({'field': 'status', 'operator': 'EQUALS'}, status))))
xml_output = self.qgc.request(self.COUNT_WASSCAN, parameters)
root = objectify.fromstring(xml_output)
root = objectify.fromstring(xml_output.encode('utf-8'))
return root.count.text
def get_reports(self):
return self.qgc.request(self.SEARCH_REPORTS)
def generate_scan_result_XML(self, limit=1000, offset=1, status='FINISHED'):
report_xml = E.ServiceRequest(
E.filters(
E.Criteria({'field': 'status', 'operator': 'EQUALS'}, status
),
),
E.preferences(
E.startFromOffset(str(offset)),
E.limitResults(str(limit))
),
)
return report_xml
def get_scan_info(self, limit=1000, offset=1, status='FINISHED'):
""" Returns XML of ALL WAS Scans"""
data = self.generate_scan_result_XML(limit=limit, offset=offset, status=status)
return self.qgc.request(self.SEARCH_WAS_SCAN, data)
def xml_parser(self, xml, dupfield=None):
all_records = []
@ -98,54 +103,31 @@ class qualysWhisperAPI(object):
all_records.append(record)
return pd.DataFrame(all_records)
def get_report_list(self):
"""Returns a dataframe of reports"""
return self.xml_parser(self.get_reports(), dupfield='user_id')
def get_web_apps(self):
"""Returns webapps available for account"""
return self.qgc.request(self.SEARCH_WEB_APPS)
def get_web_app_list(self):
"""Returns dataframe of webapps"""
return self.xml_parser(self.get_web_apps(), dupfield='user_id')
def get_web_app_details(self, was_id):
"""Get webapp details - use to retrieve app ID tag"""
return self.qgc.request(self.GET_WEBAPP_DETAILS.format(was_id=was_id))
def get_scans_by_app_id(self, app_id):
data = self.generate_app_id_scan_XML(app_id)
return self.qgc.request(self.SEARCH_WAS_SCAN, data)
def get_scan_info(self, limit=1000, offset=1, status='FINISHED'):
""" Returns XML of ALL WAS Scans"""
data = self.generate_scan_result_XML(limit=limit, offset=offset, status=status)
return self.qgc.request(self.SEARCH_WAS_SCAN, data)
def get_all_scans(self, limit=1000, offset=1, status='FINISHED'):
qualys_api_limit = limit
dataframes = []
_records = []
total = int(self.get_was_scan_count(status=status))
self.logger.info('Retrieving information for {} scans'.format(total))
for i in range(0, total):
if i % limit == 0:
if (total - i) < limit:
qualys_api_limit = total - i
self.logger.info('Making a request with a limit of {} at offset {}'.format((str(qualys_api_limit), str(i + 1))))
scan_info = self.get_scan_info(limit=qualys_api_limit, offset=i + 1, status=status)
_records.append(scan_info)
self.logger.debug('Converting XML to DataFrame')
dataframes = [self.xml_parser(xml) for xml in _records]
try:
total = int(self.get_was_scan_count(status=status))
self.logger.debug('Already have WAS scan count')
self.logger.info('Retrieving information for {} scans'.format(total))
for i in range(0, total):
if i % limit == 0:
if (total - i) < limit:
qualys_api_limit = total - i
self.logger.info('Making a request with a limit of {} at offset {}'.format((str(qualys_api_limit)), str(i + 1)))
scan_info = self.get_scan_info(limit=qualys_api_limit, offset=i + 1, status=status)
_records.append(scan_info)
self.logger.debug('Converting XML to DataFrame')
dataframes = [self.xml_parser(xml) for xml in _records]
except Exception as e:
self.logger.error("Couldn't process all scans: {}".format(e))
return pd.concat(dataframes, axis=0).reset_index().drop('index', axis=1)
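The loop above pages through the scan list in steps of the API limit and shrinks the final request to the remainder; the same offset arithmetic in isolation:

def page_offsets(total, limit=1000):
    # yields (offset, page_size) pairs; offsets are 1-based, the last page is truncated
    for i in range(0, total, limit):
        yield i + 1, min(limit, total - i)

print(list(page_offsets(2500)))  # [(1, 1000), (1001, 1000), (2001, 500)]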
def get_scan_details(self, scan_id):
return self.qgc.request(self.SCAN_DETAILS.format(scan_id=scan_id))
def get_report_details(self, report_id):
return self.qgc.request(self.REPORT_DETAILS.format(report_id=report_id))
####
#### CREATE VULNERABILITY REPORT AND DOWNLOAD IT
####
def get_report_status(self, report_id):
return self.qgc.request(self.REPORT_STATUS.format(report_id=report_id))
@ -153,30 +135,15 @@ class qualysWhisperAPI(object):
def download_report(self, report_id):
return self.qgc.request(self.REPORT_DOWNLOAD.format(report_id=report_id))
def download_scan_results(self, scan_id):
return self.qgc.request(self.SCAN_DOWNLOAD.format(scan_id=scan_id))
def generate_scan_result_XML(self, limit=1000, offset=1, status='FINISHED'):
report_xml = E.ServiceRequest(
E.filters(
E.Criteria({'field': 'status', 'operator': 'EQUALS'}, status
),
),
E.preferences(
E.startFromOffset(str(offset)),
E.limitResults(str(limit))
),
)
return report_xml
def generate_scan_report_XML(self, scan_id):
"""Generates a CSV report for an asset based on template defined in .ini file"""
report_xml = E.ServiceRequest(
E.data(
E.Report(
E.name('![CDATA[API Scan Report generated by VulnWhisperer]]>'),
E.name('<![CDATA[API Scan Report generated by VulnWhisperer]]>'),
E.description('<![CDATA[CSV Scanning report for VulnWhisperer]]>'),
E.format('CSV'),
#type is not needed, as the template already has it
E.type('WAS_SCAN_REPORT'),
E.template(
E.id(self.template_id)
@ -197,51 +164,13 @@ class qualysWhisperAPI(object):
)
return report_xml
def generate_webapp_report_XML(self, app_id):
"""Generates a CSV report for an asset based on template defined in .ini file"""
report_xml = E.ServiceRequest(
E.data(
E.Report(
E.name('![CDATA[API Web Application Report generated by VulnWhisperer]]>'),
E.description('<![CDATA[CSV WebApp report for VulnWhisperer]]>'),
E.format('CSV'),
E.template(
E.id(self.template_id)
),
E.config(
E.webAppReport(
E.target(
E.webapps(
E.WebApp(
E.id(app_id)
)
),
),
),
)
)
)
)
return report_xml
def generate_app_id_scan_XML(self, app_id):
report_xml = E.ServiceRequest(
E.filters(
E.Criteria({'field': 'webApp.id', 'operator': 'EQUALS'}, app_id
),
),
)
return report_xml
def create_report(self, report_id, kind='scan'):
mapper = {'scan': self.generate_scan_report_XML,
'webapp': self.generate_webapp_report_XML}
mapper = {'scan': self.generate_scan_report_XML}
try:
data = mapper[kind](report_id)
except Exception as e:
self.logger.error('Error creating report: {}'.format(str(e)))
return self.qgc.request(self.REPORT_CREATE, data)
return self.qgc.request(self.REPORT_CREATE, data).encode('utf-8')
def delete_report(self, report_id):
return self.qgc.request(self.DELETE_REPORT.format(report_id=report_id))
@ -359,237 +288,6 @@ class qualysUtils:
_data = reduce(lambda a, kv: a.replace(*kv), repls, str(_data))
return _data
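cleanser folds a tuple of (old, new) replacement pairs over the value with reduce; the pattern in isolation (the pairs here are illustrative):

from functools import reduce

repls = (('\n', ' '), ('\r', ' '), ('=', ''))
print(reduce(lambda a, kv: a.replace(*kv), repls, 'a=b\r\nc'))  # 'ab  c'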
class qualysWebAppReport:
# URL Vulnerability Information
WEB_APP_VULN_BLOCK = list(qualysReportFields.VULN_BLOCK)
WEB_APP_VULN_BLOCK.insert(0, 'Web Application Name')
WEB_APP_VULN_BLOCK.insert(WEB_APP_VULN_BLOCK.index('Ignored'), 'Status')
WEB_APP_VULN_HEADER = list(WEB_APP_VULN_BLOCK)
WEB_APP_VULN_HEADER[WEB_APP_VULN_BLOCK.index(qualysReportFields.CATEGORIES[0])] = \
'Vulnerability Category'
WEB_APP_SENSITIVE_HEADER = list(WEB_APP_VULN_HEADER)
WEB_APP_SENSITIVE_HEADER.insert(WEB_APP_SENSITIVE_HEADER.index('Url'
), 'Content')
WEB_APP_SENSITIVE_BLOCK = list(WEB_APP_SENSITIVE_HEADER)
WEB_APP_SENSITIVE_BLOCK[WEB_APP_SENSITIVE_BLOCK.index('Vulnerability Category'
)] = qualysReportFields.CATEGORIES[1]
WEB_APP_INFO_HEADER = list(qualysReportFields.INFO_HEADER)
WEB_APP_INFO_HEADER.insert(0, 'Web Application Name')
WEB_APP_INFO_BLOCK = list(qualysReportFields.INFO_BLOCK)
WEB_APP_INFO_BLOCK.insert(0, 'Web Application Name')
QID_HEADER = list(qualysReportFields.QID_HEADER)
GROUP_HEADER = list(qualysReportFields.GROUP_HEADER)
OWASP_HEADER = list(qualysReportFields.OWASP_HEADER)
WASC_HEADER = list(qualysReportFields.WASC_HEADER)
SCAN_META = list(qualysReportFields.SCAN_META)
CATEGORY_HEADER = list(qualysReportFields.CATEGORY_HEADER)
def __init__(
self,
config=None,
file_in=None,
file_stream=False,
delimiter=',',
quotechar='"',
):
self.logger = logging.getLogger('qualysWebAppReport')
self.file_in = file_in
self.file_stream = file_stream
self.report = None
self.utils = qualysUtils()
if config:
try:
self.qw = qualysWhisperAPI(config=config)
except Exception as e:
self.logger.error('Could not load config! Please check settings. Error: {}'.format(str(e)))
if file_stream:
self.open_file = file_in.splitlines()
elif file_in:
self.open_file = open(file_in, 'rb')
self.downloaded_file = None
def get_hostname(self, report):
host = ''
with open(report, 'rb') as csvfile:
q_report = csv.reader(csvfile, delimiter=',', quotechar='"')
for x in q_report:
if 'Web Application Name' in x[0]:
host = q_report.next()[0]
return host
def get_scanreport_name(self, report):
scan_name = ''
with open(report, 'rb') as csvfile:
q_report = csv.reader(csvfile, delimiter=',', quotechar='"')
for x in q_report:
if 'Scans' in x[0]:
scan_name = x[1]
return scan_name
def grab_sections(self, report):
all_dataframes = []
dict_tracker = {}
with open(report, 'rb') as csvfile:
dict_tracker['WEB_APP_VULN_BLOCK'] = pd.DataFrame(self.utils.grab_section(report,
self.WEB_APP_VULN_BLOCK,
end=[self.WEB_APP_SENSITIVE_BLOCK,
self.WEB_APP_INFO_BLOCK],
pop_last=True), columns=self.WEB_APP_VULN_HEADER)
dict_tracker['WEB_APP_SENSITIVE_BLOCK'] = pd.DataFrame(self.utils.grab_section(report,
self.WEB_APP_SENSITIVE_BLOCK,
end=[self.WEB_APP_INFO_BLOCK,
self.WEB_APP_SENSITIVE_BLOCK],
pop_last=True), columns=self.WEB_APP_SENSITIVE_HEADER)
dict_tracker['WEB_APP_INFO_BLOCK'] = pd.DataFrame(self.utils.grab_section(report,
self.WEB_APP_INFO_BLOCK,
end=[self.QID_HEADER],
pop_last=True), columns=self.WEB_APP_INFO_HEADER)
dict_tracker['QID_HEADER'] = pd.DataFrame(self.utils.grab_section(report,
self.QID_HEADER,
end=[self.GROUP_HEADER],
pop_last=True), columns=self.QID_HEADER)
dict_tracker['GROUP_HEADER'] = pd.DataFrame(self.utils.grab_section(report,
self.GROUP_HEADER,
end=[self.OWASP_HEADER],
pop_last=True), columns=self.GROUP_HEADER)
dict_tracker['OWASP_HEADER'] = pd.DataFrame(self.utils.grab_section(report,
self.OWASP_HEADER,
end=[self.WASC_HEADER],
pop_last=True), columns=self.OWASP_HEADER)
dict_tracker['WASC_HEADER'] = pd.DataFrame(self.utils.grab_section(report,
self.WASC_HEADER, end=[['APPENDIX']],
pop_last=True), columns=self.WASC_HEADER)
dict_tracker['CATEGORY_HEADER'] =pd.DataFrame(self.utils.grab_section(report,
self.CATEGORY_HEADER), columns=self.CATEGORY_HEADER)
all_dataframes.append(dict_tracker)
return all_dataframes
def data_normalizer(self, dataframes):
"""
Merge and clean data
:param dataframes:
:return:
"""
df_dict = dataframes[0]
merged_df = pd.concat([df_dict['WEB_APP_VULN_BLOCK'], df_dict['WEB_APP_SENSITIVE_BLOCK'],
df_dict['WEB_APP_INFO_BLOCK']], axis=0,
ignore_index=False)
merged_df = pd.merge(merged_df, df_dict['QID_HEADER'], left_on='QID',
right_on='Id')
merged_df = pd.concat([dataframes[0], dataframes[1],
dataframes[2]], axis=0,
ignore_index=False)
merged_df = pd.merge(merged_df, dataframes[3], left_on='QID',
right_on='Id')
if 'Content' not in merged_df:
merged_df['Content'] = ''
columns_to_cleanse = ['Payload #1', 'Request Method #1', 'Request URL #1',
'Request Headers #1', 'Response #1', 'Evidence #1',
'Description', 'Impact', 'Solution', 'Url', 'Content']
for col in columns_to_cleanse:
merged_df[col] = merged_df[col].astype(str).apply(self.utils.cleanser)
merged_df = pd.merge(merged_df, df_dict['CATEGORY_HEADER'])
merged_df = merged_df.drop(['QID_y', 'QID_x'], axis=1)
merged_df = merged_df.rename(columns={'Id': 'QID'})
merged_df = merged_df.replace('N/A','').fillna('')
try:
merged_df = \
merged_df[~merged_df.Title.str.contains('Links Crawled|External Links Discovered'
)]
except Exception as e:
self.logger.error('Error merging df: {}'.format(str(e)))
return merged_df
def download_file(self, file_id):
report = self.qw.download_report(file_id)
filename = str(file_id) + '.csv'
file_out = open(filename, 'w')
for line in report.splitlines():
file_out.write(line + '\n')
file_out.close()
self.logger.info('File written to {}'.format(filename))
return filename
def remove_file(self, filename):
os.remove(filename)
def process_data(self, file_id, scan=True, cleanup=True):
"""Downloads a file from qualys and normalizes it"""
download_file = self.download_file(file_id)
self.logger.info('Downloading file ID: {}'.format(file_id))
report_data = self.grab_sections(download_file)
merged_data = self.data_normalizer(report_data)
if scan:
scan_name = self.get_scanreport_name(download_file)
merged_data['ScanName'] = scan_name
# TODO cleanup old data (delete)
return merged_data
def whisper_reports(self, report_id, updated_date, cleanup=False):
"""
report_id: App ID
updated_date: Last time scan was ran for app_id
"""
vuln_ready = None
try:
if 'Z' in updated_date:
updated_date = self.utils.iso_to_epoch(updated_date)
report_name = 'qualys_web_' + str(report_id) \
+ '_{last_updated}'.format(last_updated=updated_date) \
+ '.csv'
if os.path.isfile(report_name):
self.logger.info('File already exists! Skipping...')
pass
else:
self.logger.info('Generating report for {}'.format(report_id))
status = self.qw.create_report(report_id)
root = objectify.fromstring(status)
if root.responseCode == 'SUCCESS':
self.logger.info('Successfully generated report for webapp: {}'.format(report_id))
generated_report_id = root.data.Report.id
self.logger.info('New Report ID: {}'.format(generated_report_id))
vuln_ready = self.process_data(generated_report_id)
vuln_ready.to_csv(report_name, index=False, header=True) # add when timestamp occured
self.logger.info('Report written to {}'.format(report_name))
if cleanup:
self.logger.info('Removing report {}'.format(generated_report_id))
cleaning_up = \
self.qw.delete_report(generated_report_id)
self.remove_file(str(generated_report_id) + '.csv')
self.logger.info('Deleted report: {}'.format(generated_report_id))
else:
self.logger.error('Could not process report ID: {}'.format(status))
except Exception as e:
self.logger.error('Could not process {}: {}'.format(report_id, e))
return vuln_ready
class qualysScanReport:
# URL Vulnerability Information
WEB_SCAN_VULN_BLOCK = list(qualysReportFields.VULN_BLOCK)
@ -730,6 +428,7 @@ class qualysScanReport:
merged_df = merged_df.drop(['QID_y', 'QID_x'], axis=1)
merged_df = merged_df.rename(columns={'Id': 'QID'})
merged_df = merged_df.assign(**df_dict['SCAN_META'].to_dict(orient='records')[0])
merged_df = pd.merge(merged_df, df_dict['CATEGORY_HEADER'], how='left', left_on=['Category', 'Severity Level'],
@ -739,8 +438,7 @@ class qualysScanReport:
try:
merged_df = \
merged_df[~merged_df.Title.str.contains('Links Crawled|External Links Discovered'
)]
merged_df[~merged_df.Title.str.contains('Links Crawled|External Links Discovered')]
except Exception as e:
self.logger.error('Error normalizing: {}'.format(str(e)))
return merged_df
@ -755,9 +453,6 @@ class qualysScanReport:
self.logger.info('File written to {}'.format(filename))
return filename
def remove_file(self, filename):
os.remove(filename)
def process_data(self, path='', file_id=None, cleanup=True):
"""Downloads a file from qualys and normalizes it"""
@ -766,62 +461,5 @@ class qualysScanReport:
report_data = self.grab_sections(download_file)
merged_data = self.data_normalizer(report_data)
merged_data.sort_index(axis=1, inplace=True)
# TODO cleanup old data (delete)
return merged_data
def whisper_reports(self, report_id, updated_date, cleanup=False):
"""
report_id: App ID
updated_date: Last time scan was ran for app_id
"""
vuln_ready = None
try:
if 'Z' in updated_date:
updated_date = self.utils.iso_to_epoch(updated_date)
report_name = 'qualys_web_' + str(report_id) \
+ '_{last_updated}'.format(last_updated=updated_date) \
+ '.csv'
if os.path.isfile(report_name):
self.logger.info('File already exist! Skipping...')
else:
self.logger.info('Generating report for {}'.format(report_id))
status = self.qw.create_report(report_id)
root = objectify.fromstring(status)
if root.responseCode == 'SUCCESS':
self.logger.info('Successfully generated report for webapp: {}'.format(report_id))
generated_report_id = root.data.Report.id
self.logger.info('New Report ID: {}'.format(generated_report_id))
vuln_ready = self.process_data(generated_report_id)
vuln_ready.to_csv(report_name, index=False, header=True) # add when timestamp occured
self.logger.info('Report written to {}'.format(report_name))
if cleanup:
self.logger.info('Removing report {} from disk'.format(generated_report_id))
cleaning_up = \
self.qw.delete_report(generated_report_id)
self.remove_file(str(generated_report_id) + '.csv')
self.logger.info('Deleted report from Qualys Database: {}'.format(generated_report_id))
else:
self.logger.error('Could not process report ID: {}'.format(status))
except Exception as e:
self.logger.error('Could not process {}: {}'.format(report_id, e))
return vuln_ready
maxInt = int(4000000)
maxSize = sys.maxsize
if maxSize > maxInt and type(maxSize) == int:
maxInt = maxSize
decrement = True
while decrement:
decrement = False
try:
csv.field_size_limit(maxInt)
except OverflowError:
maxInt = int(maxInt/10)
decrement = True

View File

@ -9,7 +9,7 @@ from bottle import template
import re
class JiraAPI(object):
def __init__(self, hostname=None, username=None, password=None, path="", debug=False, clean_obsolete=True, max_time_window=12):
def __init__(self, hostname=None, username=None, password=None, path="", debug=False, clean_obsolete=True, max_time_window=12, decommission_time_window=3):
self.logger = logging.getLogger('JiraAPI')
if debug:
self.logger.setLevel(logging.DEBUG)
@ -21,26 +21,40 @@ class JiraAPI(object):
self.jira = JIRA(options={'server': hostname}, basic_auth=(self.username, self.password))
self.logger.info("Created vjira service for {}".format(hostname))
self.all_tickets = []
self.excluded_tickets = []
self.JIRA_REOPEN_ISSUE = "Reopen Issue"
self.JIRA_CLOSE_ISSUE = "Close Issue"
self.max_time_tracking = max_time_window #in months
#<JIRA Resolution: name=u'Obsolete', id=u'11'>
self.JIRA_RESOLUTION_OBSOLETE = "Obsolete"
self.JIRA_RESOLUTION_FIXED = "Fixed"
self.clean_obsolete = clean_obsolete
self.template_path = 'vulnwhisp/reporting/resources/ticket.tpl'
self.max_ips_ticket = 30
self.attachment_filename = "vulnerable_assets.txt"
self.max_time_tracking = max_time_window #in months
if path:
self.download_tickets(path)
else:
self.logger.warn("No local path specified, skipping Jira ticket download.")
self.max_decommission_time = decommission_time_window #in months
# [HYGIENE] close tickets older than 12 months as obsolete (max_time_window defined)
if clean_obsolete:
self.close_obsolete_tickets()
# deletes the tag "server_decommission" from those tickets closed <=3 months ago
self.decommission_cleanup()
self.jira_still_vulnerable_comment = '''This ticket has been reopened due to the vulnerability not having been fixed (if multiple assets are affected, all need to be fixed; if the server is down, the latest known vulnerability might be the one reported).
- In the case of the team accepting the risk and wanting to close the ticket, please add the label "*risk_accepted*" to the ticket before closing it.
- If server has been decommissioned, please add the label "*server_decommission*" to the ticket before closing it.
- If when checking the vulnerability it looks like a false positive, _+please elaborate in a comment+_ and add the label "*false_positive*" before closing it; we will review it and report it to the vendor.
If you have further doubts, please contact the Security Team.'''
def create_ticket(self, title, desc, project="IS", components=[], tags=[]):
def create_ticket(self, title, desc, project="IS", components=[], tags=[], attachment_contents = []):
labels = ['vulnerability_management']
for tag in tags:
labels.append(str(tag))
self.logger.info("creating ticket for project {} title[20] {}".format(project, title[:20]))
self.logger.info("project {} has a component requirement: {}".format(project, self.PROJECT_COMPONENT_TABLE[project]))
self.logger.info("Creating ticket for project {} title: {}".format(project, title[:20]))
self.logger.debug("project {} has a component requirement: {}".format(project, components))
project_obj = self.jira.project(project)
components_ticket = []
for component in components:
@ -60,8 +74,12 @@ class JiraAPI(object):
issuetype={'name': 'Bug'},
labels=labels,
components=components_ticket)
self.logger.info("Ticket {} created successfully".format(new_issue))
if attachment_contents:
self.add_content_as_attachment(new_issue, attachment_contents)
return new_issue
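A hedged usage sketch of the extended create_ticket signature, mirroring how sync() calls it further down; jira_api and every literal here are illustrative:

new_issue = jira_api.create_ticket(
    title='critical - Example Vulnerability - example_scan',
    desc='body rendered from vulnwhisp/reporting/resources/ticket.tpl',
    project='IS',
    components=[],
    tags=['nessus', 'example_scan', 'vulnerability', 'critical'],
    attachment_contents=['10.0.0.5 - host1 - 443'])  # only set when assets exceed max_ips_ticket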
#Basic JIRA Metrics
@ -82,46 +100,97 @@ class JiraAPI(object):
#JIRA structure of each vulnerability: [source, scan_name, title, diagnosis, consequence, solution, ips, risk, references]
self.logger.info("JIRA Sync started")
# [HIGIENE] close tickets older than 12 months as obsolete
# Higiene clean up affects to all tickets created by the module, filters by label 'vulnerability_management'
if self.clean_obsolete:
self.close_obsolete_tickets()
for vuln in vulnerabilities:
# JIRA doesn't allow labels with spaces, so making sure that the scan_name doesn't have spaces
# if it has, they will be replaced by "_"
if " " in vuln['scan_name']:
vuln['scan_name'] = "_".join(vuln['scan_name'].split(" "))
exists = False
to_update = False
ticketid = ""
ticket_assets = []
exists, to_update, ticketid, ticket_assets = self.check_vuln_already_exists(vuln)
# we exclude from the vulnerabilities to report those assets that already exist with *risk_accepted*/*server_decommission*
vuln = self.exclude_accepted_assets(vuln)
# make sure after exclusion of risk_accepted assets there are still assets
if vuln['ips']:
exists = False
to_update = False
ticketid = ""
ticket_assets = []
exists, to_update, ticketid, ticket_assets = self.check_vuln_already_exists(vuln)
if exists:
# If ticket "resolved" -> reopen, as vulnerability is still existent
self.reopen_ticket(ticketid)
self.add_label(ticketid, vuln['risk'])
continue
elif to_update:
self.ticket_update_assets(vuln, ticketid, ticket_assets)
self.add_label(ticketid, vuln['risk'])
continue
try:
tpl = template(self.template_path, vuln)
except Exception as e:
self.logger.error('Exception templating: {}'.format(str(e)))
return 0
self.create_ticket(title=vuln['title'], desc=tpl, project=project, components=components, tags=[vuln['source'], vuln['scan_name'], 'vulnerability', vuln['risk']])
if exists:
# If ticket "resolved" -> reopen, as vulnerability is still existent
self.reopen_ticket(ticketid=ticketid, comment=self.jira_still_vulnerable_comment)
self.add_label(ticketid, vuln['risk'])
continue
elif to_update:
self.ticket_update_assets(vuln, ticketid, ticket_assets)
self.add_label(ticketid, vuln['risk'])
continue
attachment_contents = []
# if assets >30, add as attachment
# create local text file with assets, attach it to ticket
if len(vuln['ips']) > self.max_ips_ticket:
attachment_contents = vuln['ips']
vuln['ips'] = ["Affected hosts ({assets}) exceed Jira's allowed character limit, added as an attachment.".format(assets = len(attachment_contents))]
try:
tpl = template(self.template_path, vuln)
except Exception as e:
self.logger.error('Exception templating: {}'.format(str(e)))
return 0
self.create_ticket(title=vuln['title'], desc=tpl, project=project, components=components, tags=[vuln['source'], vuln['scan_name'], 'vulnerability', vuln['risk']], attachment_contents = attachment_contents)
else:
self.logger.info("Ignoring vulnerability as all assets are already reported in a risk_accepted ticket")
self.close_fixed_tickets(vulnerabilities)
# we reinitialize so the next sync redoes the query with their specific variables
self.all_tickets = []
self.excluded_tickets = []
return True
def exclude_accepted_assets(self, vuln):
# we want to check JIRA tickets with risk_accepted/server_decommission or false_positive labels sharing the same source
# tickets older than 12 months are excluded; old tickets get closed for hygiene and recreated if still vulnerable
labels = [vuln['source'], vuln['scan_name'], 'vulnerability_management', 'vulnerability']
if not self.excluded_tickets:
jql = "{} AND labels in (risk_accepted,server_decommission, false_positive) AND NOT labels=advisory AND created >=startOfMonth(-{})".format(" AND ".join(["labels={}".format(label) for label in labels]), self.max_time_tracking)
self.excluded_tickets = self.jira.search_issues(jql, maxResults=0)
title = vuln['title']
#WARNING: function IGNORES DUPLICATES; after finding a "duplicate" it will just return that it exists
#it won't iterate over the rest of the tickets looking for other possible duplicates/similar issues
self.logger.info("Comparing vulnerability to risk_accepted tickets")
assets_to_exclude = []
tickets_excluded_assets = []
for index in range(len(self.excluded_tickets)):
checking_ticketid, checking_title, checking_assets = self.ticket_get_unique_fields(self.excluded_tickets[index])
if title.encode('ascii') == checking_title.encode('ascii'):
if checking_assets:
#checking_assets is a list, we add to our full list for later delete all assets
assets_to_exclude+=checking_assets
tickets_excluded_assets.append(checking_ticketid)
if assets_to_exclude:
assets_to_remove = []
self.logger.warn("Vulnerable Assets seen on an already existing risk_accepted Jira ticket: {}".format(', '.join(tickets_excluded_assets)))
self.logger.debug("Original assets: {}".format(vuln['ips']))
#assets in vulnerability have the structure "ip - hostname - port", so we need to match by partial
for exclusion in assets_to_exclude:
# for efficiency, we walk the array of ips from the scanners backwards, as we will be popping out the matches
# and we don't want it to affect the rest of the processing (otherwise, it would miss the asset right after the removed one)
for index in range(len(vuln['ips']))[::-1]:
if exclusion == vuln['ips'][index].split(" - ")[0]:
self.logger.debug("Deleting asset {} from vulnerability {}, seen in risk_accepted.".format(vuln['ips'][index], title))
vuln['ips'].pop(index)
self.logger.debug("Modified assets: {}".format(vuln['ips']))
return vuln
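The backwards index walk is what makes pop() safe during iteration; the same pattern in isolation:

ips = ['10.0.0.1 - a - 80', '10.0.0.2 - b - 443', '10.0.0.3 - c - 22']
excluded = ['10.0.0.2']
for index in range(len(ips))[::-1]:  # walk backwards so pops don't shift unvisited items
    if ips[index].split(' - ')[0] in excluded:
        ips.pop(index)
print(ips)  # ['10.0.0.1 - a - 80', '10.0.0.3 - c - 22']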
def check_vuln_already_exists(self, vuln):
'''
This function compares a vulnerability with a collection of tickets.
Returns [exists (bool), is equal (bool), ticketid (str), assets (array)]
'''
# we need to return if the vulnerability has already been reported and the ID of the ticket for further processing
#function returns array [duplicated(bool), update(bool), ticketid, ticket_assets]
title = vuln['title']
@ -140,9 +209,10 @@ class JiraAPI(object):
#WARNING: function IGNORES DUPLICATES, after finding a "duplicate" will just return it exists
#it wont iterate over the rest of tickets looking for other possible duplicates/similar issues
self.logger.info("Comparing Vulnerabilities to created tickets")
for index in range(len(self.all_tickets)-1):
for index in range(len(self.all_tickets)):
checking_ticketid, checking_title, checking_assets = self.ticket_get_unique_fields(self.all_tickets[index])
if title == checking_title:
# added "not risk_accepted", as if it is risk_accepted, we will create a new ticket excluding the accepted assets
if title.encode('ascii') == checking_title.encode('ascii') and not self.is_risk_accepted(self.jira.issue(checking_ticketid)):
difference = list(set(assets).symmetric_difference(checking_assets))
#to check intersection - set(assets) & set(checking_assets)
if difference:
@ -156,15 +226,79 @@ class JiraAPI(object):
def ticket_get_unique_fields(self, ticket):
title = ticket.raw.get('fields', {}).get('summary').encode("ascii").strip()
ticketid = ticket.key.encode("ascii")
assets = []
try:
affected_assets_section = ticket.raw.get('fields', {}).get('description').encode("ascii").split("{panel:title=Affected Assets}")[1].split("{panel}")[0]
assets = list(set(re.findall(r"\b\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\b", affected_assets_section)))
except:
self.logger.error("Ticket IPs regex failed. Ticket ID: {}".format(ticketid))
except Exception as e:
self.logger.error("Ticket IPs regex failed. Ticket ID: {}. Reason: {}".format(ticketid, e))
assets = []
try:
if not assets:
#check if attachment, if so, get assets from attachment
affected_assets_section = self.check_ips_attachment(ticket)
if affected_assets_section:
assets = list(set(re.findall(r"\b\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\b", affected_assets_section)))
except Exception as e:
self.logger.error("Ticket IPs Attachment regex failed. Ticket ID: {}. Reason: {}".format(ticketid, e))
return ticketid, title, assets
def check_ips_attachment(self, ticket):
affected_assets_section = []
try:
fields = self.jira.issue(ticket.key).raw.get('fields', {})
attachments = fields.get('attachment', {})
affected_assets_section = ""
#we will make sure we get the latest version of the file
latest = ''
attachment_id = ''
if attachments:
for item in attachments:
if item.get('filename') == self.attachment_filename:
if not latest:
latest = item.get('created')
attachment_id = item.get('id')
else:
if latest < item.get('created'):
latest = item.get('created')
attachment_id = item.get('id')
affected_assets_section = self.jira.attachment(attachment_id).get()
except Exception as e:
self.logger.error("Failed to get assets from ticket attachment. Ticket ID: {}. Reason: {}".format(ticket, e))
return affected_assets_section
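The created-timestamp comparison above works because Jira returns ISO 8601 strings, which sort lexicographically; the selection logic in isolation (fixture data is illustrative):

attachments = [
    {'filename': 'vulnerable_assets.txt', 'created': '2019-03-01T10:00:00.000+0000', 'id': '100'},
    {'filename': 'vulnerable_assets.txt', 'created': '2019-04-01T10:00:00.000+0000', 'id': '101'},
]
candidates = [a for a in attachments if a['filename'] == 'vulnerable_assets.txt']
print(max(candidates, key=lambda a: a['created'])['id'])  # '101'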
def clean_old_attachments(self, ticket):
fields = ticket.raw.get('fields')
attachments = fields.get('attachment')
if attachments:
for item in attachments:
if item.get('filename') == self.attachment_filename:
self.jira.delete_attachment(item.get('id'))
def add_content_as_attachment(self, issue, contents):
try:
#Create the file locally with the data
attachment_file = open(self.attachment_filename, "w")
attachment_file.write("\n".join(contents))
attachment_file.close()
#Push the created file to the ticket
attachment_file = open(self.attachment_filename, "rb")
self.jira.add_attachment(issue, attachment_file, self.attachment_filename)
attachment_file.close()
#remove the temp file
os.remove(self.attachment_filename)
self.logger.info("Added attachment successfully.")
except:
self.logger.error("Error while attaching file to ticket.")
return False
return True
def get_ticket_reported_assets(self, ticket):
#[METRICS] return a list with all the affected assets for that vulnerability (including already resolved ones)
return list(set(re.findall(r"\b\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\b",str(self.jira.issue(ticket).raw))))
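The same IPv4 regex is reused throughout the module to pull assets out of ticket text; for example:

import re

raw = '{panel:title=Affected Assets} 10.0.0.5 - web01 - 443, 10.0.0.6 - db01 {panel}'
print(sorted(set(re.findall(r"\b\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\b", raw))))
# ['10.0.0.5', '10.0.0.6']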
@ -180,7 +314,6 @@ class JiraAPI(object):
start = datetime(created[0],created[1],created[2],created[3],created[4],created[5])
end = datetime(resolved[0],resolved[1],resolved[2],resolved[3],resolved[4],resolved[5])
return (end-start).days
else:
self.logger.error("Ticket {ticket} is not resolved, can't calculate resolution time".format(ticket=ticket))
@ -191,50 +324,101 @@ class JiraAPI(object):
# correct description will always be in the vulnerability to report, only needed to update description to new one
self.logger.info("Ticket {} exists, UPDATE requested".format(ticketid))
if self.is_ticket_resolved(self.jira.issue(ticketid)):
self.reopen_ticket(ticketid)
#for now, if a vulnerability has been accepted ('accepted_risk'), ticket is completely ignored and not updated (no new assets)
#TODO when vulnerability accepted, create a new ticket with only the non-accepted vulnerable assets
#this would require go through the downloaded tickets, check duplicates/accepted ones, and if so,
#check on their assets to exclude them from the new ticket
risk_accepted = False
ticket_obj = self.jira.issue(ticketid)
if self.is_ticket_resolved(ticket_obj):
if self.is_risk_accepted(ticket_obj):
return 0
self.reopen_ticket(ticketid=ticketid, comment=self.jira_still_vulnerable_comment)
#First, compare the assets
ticket_obj.update()
assets = list(set(re.findall(r"\b\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\b", ",".join(vuln['ips']))))
difference = list(set(assets).symmetric_difference(ticket_assets))
comment = ''
added = ''
removed = ''
#put a comment with the assets that have been added/removed
for asset in difference:
if asset in assets:
if not added:
added = '\nThe following assets *have been newly detected*:\n'
added += '* {}\n'.format(asset)
elif asset in ticket_assets:
if not removed:
removed= '\nThe following assets *have been resolved*:\n'
removed += '* {}\n'.format(asset)
comment = added + removed
#then check whether there are too many assets, which would need to be added as an attachment
attachment_contents = []
if len(vuln['ips']) > self.max_ips_ticket:
attachment_contents = vuln['ips']
vuln['ips'] = ["Affected hosts ({assets}) exceed Jira's allowed character limit, added as an attachment.".format(assets = len(attachment_contents))]
#fill the ticket description template
try:
tpl = template(self.template_path, vuln)
except Exception as e:
self.logger.error('Exception updating assets: {}'.format(str(e)))
return 0
ticket_obj = self.jira.issue(ticketid)
ticket_obj.update()
assets = list(set(re.findall(r"\b\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\b", ",".join(vuln['ips']))))
difference = list(set(assets).symmetric_difference(ticket_assets))
comment = ''
#put a comment with the assets that have been added/removed
for asset in difference:
if asset in assets:
comment += "Asset {} have been added to the ticket as vulnerability *has been newly detected*.\n".format(asset)
elif asset in ticket_assets:
comment += "Asset {} have been removed from the ticket as vulnerability *has been resolved*.\n".format(asset)
#proceed checking if it requires adding as an attachment
try:
#update attachment with hosts and delete the old versions
if attachment_contents:
self.clean_old_attachments(ticket_obj)
self.add_content_as_attachment(ticket_obj, attachment_contents)
ticket_obj.update(description=tpl, comment=comment, fields={"labels":ticket_obj.fields.labels})
self.logger.info("Ticket {} updated successfully".format(ticketid))
self.add_label(ticketid, 'updated')
except:
self.logger.error("Error while trying up update ticket {}".format(ticketid))
except Exception as e:
self.logger.error("Error while trying up update ticket {ticketid}.\nReason: {e}".format(ticketid = ticketid, e=e))
return 0
def add_label(self, ticketid, label):
ticket_obj = self.jira.issue(ticketid)
if label not in ticket_obj.fields.labels:
ticket_obj.fields.labels.append(label)
if label not in [x.encode('utf8') for x in ticket_obj.fields.labels]:
ticket_obj.fields.labels.append(label)
try:
ticket_obj.update(fields={"labels":ticket_obj.fields.labels})
self.logger.info("Added label {label} to ticket {ticket}".format(label=label, ticket=ticketid))
except:
self.logger.error("Error while trying to add label {label} to ticket {ticket}".format(label=label, ticket=ticketid))
return 0
def remove_label(self, ticketid, label):
ticket_obj = self.jira.issue(ticketid)
if label in [x.encode('utf8') for x in ticket_obj.fields.labels]:
ticket_obj.fields.labels.remove(label)
try:
ticket_obj.update(fields={"labels":ticket_obj.fields.labels})
self.logger.info("Removed label {label} from ticket {ticket}".format(label=label, ticket=ticketid))
except:
self.logger.error("Error while trying to remove label {label} to ticket {ticket}".format(label=label, ticket=ticketid))
else:
self.logger.error("Error: label {label} not in ticket {ticket}".format(label=label, ticket=ticketid))
try:
ticket_obj.update(fields={"labels":ticket_obj.fields.labels})
self.logger.info("Added label {label} to ticket {ticket}".format(label=label, ticket=ticketid))
except:
self.logger.error("Error while trying to add label {label} to ticket {ticket}".format(label=label, ticket=ticketid))
return 0
def close_fixed_tickets(self, vulnerabilities):
# close tickets which vulnerabilities have been resolved and are still open
'''
Close tickets whose vulnerabilities have been resolved but which are still open.
Hygiene cleanup affects all tickets created by the module; filters by label 'vulnerability_management'
'''
found_vulns = []
for vuln in vulnerabilities:
found_vulns.append(vuln['title'])
@ -287,28 +471,28 @@ class JiraAPI(object):
if "risk_accepted" in labels:
self.logger.warn("Ticket {} accepted risk, will be ignored".format(ticket_obj))
return True
elif "server_decomission" in labels:
self.logger.warn("Ticket {} server decomissioned, will be ignored".format(ticket_obj))
elif "server_decommission" in labels:
self.logger.warn("Ticket {} server decommissioned, will be ignored".format(ticket_obj))
return True
elif "false_positive" in labels:
self.logger.warn("Ticket {} flagged false positive, will be ignored".format(ticket_obj))
return True
self.logger.info("Ticket {} risk has not been accepted".format(ticket_obj))
return False
def reopen_ticket(self, ticketid):
def reopen_ticket(self, ticketid, ignore_labels=False, comment=""):
self.logger.debug("Ticket {} exists, REOPEN requested".format(ticketid))
# this will reopen a ticket by ticketid
ticket_obj = self.jira.issue(ticketid)
if self.is_ticket_resolved(ticket_obj):
if not self.is_risk_accepted(ticket_obj):
if (not self.is_risk_accepted(ticket_obj) or ignore_labels):
try:
if self.is_ticket_reopenable(ticket_obj):
comment = '''This ticket has been reopened due to the vulnerability not having been fixed (if multiple assets are affected, all need to be fixed; if the server is down, lastest known vulnerability might be the one reported).
In the case of the team accepting the risk and wanting to close the ticket, please add the label "*risk_accepted*" to the ticket before closing it.
If server has been decomissioned, please add the label "*server_decomission*" to the ticket before closing it.
If you have further doubts, please contact the Security Team.'''
error = self.jira.transition_issue(issue=ticketid, transition=self.JIRA_REOPEN_ISSUE, comment = comment)
self.logger.info("Ticket {} reopened successfully".format(ticketid))
self.add_label(ticketid, 'reopened')
if not ignore_labels:
self.add_label(ticketid, 'reopened')
return 1
except Exception as e:
# continue with ticket data so that a new ticket is created in place of the "lost" one
@ -341,8 +525,8 @@ class JiraAPI(object):
jql = "labels=vulnerability_management AND created <startOfMonth(-{}) and resolution=Unresolved".format(self.max_time_tracking)
tickets_to_close = self.jira.search_issues(jql, maxResults=0)
comment = '''This ticket is being closed for hygiene, as it is more than 12 months old.
If the vulnerability still exists, a new ticket will be opened.'''
comment = '''This ticket is being closed for hygiene, as it is more than {} months old.
If the vulnerability still exists, a new ticket will be opened.'''.format(self.max_time_tracking)
for ticket in tickets_to_close:
self.close_ticket(ticket, self.JIRA_RESOLUTION_OBSOLETE, comment)
@ -358,7 +542,9 @@ class JiraAPI(object):
return False
def download_tickets(self, path):
#saves all tickets locally, local snapshot of vulnerability_management ticktes
'''
saves all tickets locally, local snapshot of vulnerability_management tickets
'''
#check if file already exists
check_date = str(date.today())
fname = '{}jira_{}.json'.format(path, check_date)
@ -382,3 +568,26 @@ class JiraAPI(object):
self.logger.error("Tickets could not be saved locally: {}.".format(e))
return False
def decommission_cleanup(self):
'''
deletes the server_decommission tag from those tickets that have been
closed already for more than x months (default is 3 months) in order to clean solved issues
for statistics purposes
'''
self.logger.info("Deleting 'server_decommission' tag from tickets closed more than {} months ago".format(self.max_decommission_time))
jql = "labels=vulnerability_management AND labels=server_decommission and resolutiondate <=startOfMonth(-{})".format(self.max_decommission_time)
decommissioned_tickets = self.jira.search_issues(jql, maxResults=0)
comment = '''The *server_decommission* label is being removed from this ticket, as it is more than {} months old and the server is expected to have been decommissioned by now.
If that is not the case and the vulnerability still exists, the ticket will be reopened.'''.format(self.max_decommission_time)
for ticket in decommissioned_tickets:
#we reopen the ticket first, to make sure the process is not blocked by
#a missing jira workflow transition or a disallowed edit on closed tickets
self.reopen_ticket(ticketid=ticket, ignore_labels=True)
self.remove_label(ticket, 'server_decommission')
self.close_ticket(ticket, self.JIRA_RESOLUTION_FIXED, comment)
return 0
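With the default decommission_time_window of 3, the JQL above renders as follows (shown only to make the filter concrete):

jql = ("labels=vulnerability_management AND labels=server_decommission "
       "and resolutiondate <=startOfMonth(-3)")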

View File

@ -29,4 +29,6 @@ Please do not delete or modify the ticket assigned tags or title, as they are us
In the case of the team accepting the risk and wanting to close the ticket, please add the label "*risk_accepted*" to the ticket before closing it.
If server has been decomissioned, please add the label "*server_decomission*" to the ticket before closing it.
If server has been decommissioned, please add the label "*server_decommission*" to the ticket before closing it.
If when checking the vulnerability it looks like a false positive, _+please elaborate in a comment+_ and add the label "*false_positive*" before closing it; we will review it and report it to the vendor.

View File

76
vulnwhisp/test/mock.py Normal file
View File

@ -0,0 +1,76 @@
import os
import logging
import httpretty
class mockAPI(object):
def __init__(self, mock_dir=None, debug=False):
self.mock_dir = mock_dir
if not self.mock_dir:
# Try to guess the mock_dir if python setup.py develop was used
self.mock_dir = '/'.join(__file__.split('/')[:-3]) + '/tests/data'
self.logger = logging.getLogger('mockAPI')
if debug:
self.logger.setLevel(logging.DEBUG)
self.logger.info('mockAPI initialised, API requests will be mocked')
self.logger.debug('Test path resolved as {}'.format(self.mock_dir))
def get_directories(self, path):
dir, subdirs, files = next(os.walk(path))
return subdirs
def get_files(self, path):
dir, subdirs, files = next(os.walk(path))
return files
def qualys_vuln_callback(self, request, uri, response_headers):
self.logger.debug('Simulating response for {} ({})'.format(uri, request.body))
if 'list' in request.parsed_body['action']:
return [200,
response_headers,
open('{}/{}'.format(self.qualys_vuln_path, 'scans')).read()]
elif 'fetch' in request.parsed_body['action']:
try:
response_body = open('{}/{}'.format(
self.qualys_vuln_path,
request.parsed_body['scan_ref'][0].replace('/', '_'))
).read()
except:
# Can't find the file, just send an empty response
response_body = ''
return [200, response_headers, response_body]
def create_nessus_resource(self, framework):
for filename in self.get_files('{}/{}'.format(self.mock_dir, framework)):
method, resource = filename.split('_', 1)
resource = resource.replace('_', '/')
self.logger.debug('Adding mocked {} endpoint {} {}'.format(framework, method, resource))
httpretty.register_uri(
getattr(httpretty, method), 'https://{}:443/{}'.format(framework, resource),
body=open('{}/{}/{}'.format(self.mock_dir, framework, filename)).read()
)
def create_qualys_vuln_resource(self, framework):
# Create health check endpoint
self.logger.debug('Adding mocked {} endpoint {} {}'.format(framework, 'GET', 'msp/about.php'))
httpretty.register_uri(
httpretty.GET,
'https://{}:443/{}'.format(framework, 'msp/about.php'),
body='')
self.logger.debug('Adding mocked {} endpoint {} {}'.format(framework, 'POST', 'api/2.0/fo/scan'))
httpretty.register_uri(
httpretty.POST, 'https://{}:443/{}'.format(framework, 'api/2.0/fo/scan/'),
body=self.qualys_vuln_callback)
def mock_endpoints(self):
for framework in self.get_directories(self.mock_dir):
if framework in ['nessus', 'tenable']:
self.create_nessus_resource(framework)
elif framework == 'qualys_vuln':
self.qualys_vuln_path = self.mock_dir + '/' + framework
self.create_qualys_vuln_resource(framework)
httpretty.enable()
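A hedged usage sketch of the new mock layer, along the lines the test suite is expected to use it; the fixture path is illustrative:

from vulnwhisp.test.mock import mockAPI

mock_api = mockAPI('tests/data', debug=True)
mock_api.mock_endpoints()  # registers nessus/tenable and qualys_vuln fixtures with httpretty
# from here on, requests to e.g. https://nessus:443/... are served from the fixture files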

View File

@ -43,11 +43,11 @@ class vulnWhispererBase(object):
if self.CONFIG_SECTION is None:
raise Exception('Implementing class must define CONFIG_SECTION')
self.exit_code = 0
self.db_name = db_name
self.purge = purge
self.develop = develop
if config is not None:
self.config = vwConfig(config_in=config)
try:
@ -102,6 +102,7 @@ class vulnWhispererBase(object):
'source',
'uuid',
'processed',
'reported',
]
self.init()
@ -115,7 +116,7 @@ class vulnWhispererBase(object):
'CREATE TABLE IF NOT EXISTS scan_history (id INTEGER PRIMARY KEY,'
' scan_name TEXT, scan_id INTEGER, last_modified DATE, filename TEXT,'
' download_time DATE, record_count INTEGER, source TEXT,'
' uuid TEXT, processed INTEGER)'
' uuid TEXT, processed INTEGER, reported INTEGER)'
)
self.conn.commit()
@ -142,10 +143,36 @@ class vulnWhispererBase(object):
return data
def record_insert(self, record):
self.cur.execute('insert into scan_history({table_columns}) values (?,?,?,?,?,?,?,?,?)'.format(
table_columns=', '.join(self.table_columns)),
record)
self.conn.commit()
# for backwards compatibility with older versions that lack the "reported" field
try:
# [-1] selects the last column of scan_history, [1] its name: "processed" on old schemas, "reported" on new ones
# TODO drop this backward-compatibility check after a few releases
last_column_table = self.cur.execute('PRAGMA table_info(scan_history)').fetchall()[-1][1]
if last_column_table == self.table_columns[-1]:
self.cur.execute('insert into scan_history({table_columns}) values (?,?,?,?,?,?,?,?,?,?)'.format(
table_columns=', '.join(self.table_columns)), record)
else:
self.cur.execute('insert into scan_history({table_columns}) values (?,?,?,?,?,?,?,?,?)'.format(
table_columns=', '.join(self.table_columns[:-1])), record[:-1])
self.conn.commit()
except Exception as e:
self.logger.error("Failed to insert record in database. Error: {}".format(e))
sys.exit(1)
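
The backward-compatibility branch above leans on the shape of PRAGMA table_info output: one row per column, with the column name in field 1. A standalone sketch of the check against a throwaway schema (matching the CREATE TABLE in init()):

import sqlite3

conn = sqlite3.connect(':memory:')
cur = conn.cursor()
cur.execute('CREATE TABLE scan_history (id INTEGER PRIMARY KEY, processed INTEGER, reported INTEGER)')

# each row is (cid, name, type, notnull, dflt_value, pk), so [-1][1] is the
# name of the last column: 'reported' on new schemas, 'processed' on old ones
print(cur.execute('PRAGMA table_info(scan_history)').fetchall()[-1][1])  # -> reported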
def set_latest_scan_reported(self, filename):
# we key on the filename rather than source/scan_name: the filename belongs to that latest
# scan already, which guarantees we mark exactly the scan we just checked
try:
self.cur.execute('UPDATE scan_history SET reported = 1 WHERE filename="{}";'.format(filename))
self.conn.commit()
self.logger.info('Scan {} marked as successfully reported.'.format(filename))
return True
except Exception as e:
self.logger.error('Failed to mark scan with file {} as reported'.format(filename))
return False
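
One caveat on the UPDATE above: the filename is spliced straight into the SQL string, so a filename containing a double quote would break the statement. A parameter-bound sketch of the same update (an alternative, not what this release ships; mark_reported is a hypothetical standalone helper):

def mark_reported(cur, conn, filename):
    # sqlite3 binds the value itself, so embedded quotes cannot alter the statement
    cur.execute('UPDATE scan_history SET reported = 1 WHERE filename = ?', (filename,))
    conn.commit()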
def retrieve_uuids(self):
"""
@ -164,24 +191,34 @@ class vulnWhispererBase(object):
if not os.path.exists(self.write_path):
os.makedirs(self.write_path)
self.logger.info('Directory created at {scan}'.format(
scan=self.write_path))
scan=self.write_path.encode('utf8')))
else:
self.logger.info('Directory already exists for {scan} - Skipping creation'.format(
scan=self.write_path))
scan=self.write_path.encode('utf8')))
def get_latest_results(self, source, scan_name):
processed = 0
results = []
reported = ''
try:
self.conn.text_factory = str
self.cur.execute('SELECT filename FROM scan_history WHERE source="{}" AND scan_name="{}" ORDER BY last_modified DESC LIMIT 1;'.format(source, scan_name))
#should always return just one filename
results = [r[0] for r in self.cur.fetchall()][0]
except:
results = []
return results
return True
# [-1] selects the last column of scan_history, [1] its name: "processed" on old schemas, "reported" on new ones
# TODO drop this backward-compatibility check after a few releases
last_column_table = self.cur.execute('PRAGMA table_info(scan_history)').fetchall()[-1][1]
if results and last_column_table == self.table_columns[-1]:
reported = self.cur.execute('SELECT reported FROM scan_history WHERE filename="{}"'.format(results)).fetchall()
reported = reported[0][0]
if reported:
self.logger.debug("Last downloaded scan from source {source} scan_name {scan_name} has already been reported".format(source=source, scan_name=scan_name))
except Exception as e:
self.logger.error("Error when getting latest results from {}.{} : {}".format(source, scan_name, e))
return results, reported
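
Since get_latest_results now hands back a (filename, reported) pair rather than a bare filename, every caller has to unpack both values, as get_env_variables does further down. The contract in miniature (vw, the source, and the scan name are hypothetical):

filename, reported = vw.get_latest_results('qualys_vuln', 'external_weekly')
if reported:
    # the newest scan was already pushed to Jira, so jira_sync will skip it
    pass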
def get_scan_profiles(self):
# Returns a list of source.scan_name elements from the database
@ -302,7 +339,6 @@ class vulnWhispererNessus(vulnWhispererBase):
scan_records.append(record.copy())
except Exception as e:
# Generates error each time nonetype is encountered.
pass
if completed:
@ -312,7 +348,7 @@ class vulnWhispererNessus(vulnWhispererBase):
def whisper_nessus(self):
if self.nessus_connect:
scan_data = self.nessus.get_scans()
scan_data = self.nessus.scans
folders = scan_data['folders']
scans = scan_data['scans'] if scan_data['scans'] else []
all_scans = self.scan_count(scans)
@ -325,7 +361,7 @@ class vulnWhispererNessus(vulnWhispererBase):
if not scan_list:
self.logger.warn('No new scans to process. Exiting...')
return 0
return self.exit_code
# Create scan subfolders
@ -338,8 +374,7 @@ class vulnWhispererNessus(vulnWhispererBase):
else:
self.logger.info('Directory already exists for {scan} - Skipping creation'.format(
scan=self.path_check(f['name'
])))
scan=self.path_check(f['name']).encode('utf8')))
# try to download and save scans into each folder they belong to
@ -368,17 +403,16 @@ class vulnWhispererNessus(vulnWhispererBase):
# TODO Create directory sync function which scans the directory for files that exist already and populates the database
folder_id = s['folder_id']
scan_history = self.nessus.get_scan_history(scan_id)
if self.CONFIG_SECTION == 'tenable':
folder_name = ''
else:
folder_name = next(f['name'] for f in folders if f['id'] == folder_id)
if status == 'completed':
if status in ['completed', 'imported']:
file_name = '%s_%s_%s_%s.%s' % (scan_name, scan_id,
history_id, norm_time, 'csv')
repls = (('\\', '_'), ('/', '_'), ('/', '_'), (' ', '_'))
repls = (('\\', '_'), ('/', '_'), (' ', '_'))
file_name = reduce(lambda a, kv: a.replace(*kv), repls, file_name)
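# e.g. reduce(lambda a, kv: a.replace(*kv), repls, 'My Scans/Q1 2019.csv')
# applies each (old, new) pair in turn and yields 'My_Scans_Q1_2019.csv'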
relative_path_name = self.path_check(folder_name + '/' + file_name)
relative_path_name = self.path_check(folder_name + '/' + file_name).encode('utf8')
if os.path.isfile(relative_path_name):
if self.develop:
@ -393,18 +427,24 @@ class vulnWhispererNessus(vulnWhispererBase):
self.CONFIG_SECTION,
uuid,
1,
0,
)
self.record_insert(record_meta)
self.logger.info('File {filename} already exists! Updating database'.format(filename=relative_path_name))
else:
file_req = \
self.nessus.download_scan(scan_id=scan_id, history=history_id,
export_format='csv', profile=self.CONFIG_SECTION)
try:
file_req = \
self.nessus.download_scan(scan_id=scan_id, history=history_id,
export_format='csv', profile=self.CONFIG_SECTION)
except Exception as e:
self.logger.error('Could not download {} scan {}: {}'.format(self.CONFIG_SECTION, scan_id, str(e)))
self.exit_code += 1
continue
clean_csv = \
pd.read_csv(io.StringIO(file_req.decode('utf-8'
)))
pd.read_csv(io.StringIO(file_req.decode('utf-8')))
if len(clean_csv) > 2:
self.logger.info('Processing {}/{} for scan: {}'.format(scan_count, len(scan_list), scan_name))
self.logger.info('Processing {}/{} for scan: {}'.format(scan_count, len(scan_list), scan_name.encode('utf8')))
columns_to_cleanse = ['CVSS','CVE','Description','Synopsis','Solution','See Also','Plugin Output']
for col in columns_to_cleanse:
@ -421,10 +461,11 @@ class vulnWhispererNessus(vulnWhispererBase):
self.CONFIG_SECTION,
uuid,
1,
0,
)
self.record_insert(record_meta)
self.logger.info('{filename} records written to {path} '.format(filename=clean_csv.shape[0],
path=file_name))
path=file_name.encode('utf8')))
else:
record_meta = (
scan_name,
@ -436,14 +477,16 @@ class vulnWhispererNessus(vulnWhispererBase):
self.CONFIG_SECTION,
uuid,
1,
0,
)
self.record_insert(record_meta)
self.logger.warn('{} has no host available... Updating database and skipping!'.format(file_name))
self.conn.close()
self.logger.info('Scan aggregation complete! Connection to database closed.')
else:
self.logger.error('Failed to use scanner at {host}:{port}'.format(host=self.hostname, port=self.nessus_port))
self.exit_code += 1
return self.exit_code
class vulnWhispererQualys(vulnWhispererBase):
@ -513,7 +556,6 @@ class vulnWhispererQualys(vulnWhispererBase):
if debug:
self.logger.setLevel(logging.DEBUG)
self.qualys_scan = qualysScanReport(config=config)
self.latest_scans = self.qualys_scan.qw.get_all_scans()
self.directory_check()
@ -539,7 +581,7 @@ class vulnWhispererQualys(vulnWhispererBase):
+ '_{last_updated}'.format(last_updated=launched_date) \
+ '.{extension}'.format(extension=output_format)
relative_path_name = self.path_check(report_name)
relative_path_name = self.path_check(report_name).encode('utf8')
if os.path.isfile(relative_path_name):
#TODO Possibly make this optional to sync directories
@ -554,6 +596,7 @@ class vulnWhispererQualys(vulnWhispererBase):
self.CONFIG_SECTION,
report_id,
1,
0,
)
self.record_insert(record_meta)
self.logger.info('File {filename} already exists! Updating database'.format(filename=relative_path_name))
@ -583,6 +626,7 @@ class vulnWhispererQualys(vulnWhispererBase):
self.CONFIG_SECTION,
report_id,
1,
0,
)
self.record_insert(record_meta)
@ -625,7 +669,7 @@ class vulnWhispererQualys(vulnWhispererBase):
for app in self.scans_to_process.iterrows():
counter += 1
r = app[1]
self.logger.debug('Processing {}/{}'.format(counter, len(self.scans_to_process)))
self.logger.info('Processing {}/{}'.format(counter, len(self.scans_to_process)))
self.whisper_reports(report_id=r['id'],
launched_date=r['launchedDate'],
scan_name=r['name'],
@ -633,7 +677,7 @@ class vulnWhispererQualys(vulnWhispererBase):
else:
self.logger.info('No new scans to process. Exiting...')
self.conn.close()
return 0
return self.exit_code
class vulnWhispererOpenVAS(vulnWhispererBase):
@ -679,6 +723,7 @@ class vulnWhispererOpenVAS(vulnWhispererBase):
if debug:
self.logger.setLevel(logging.DEBUG)
self.directory_check()
self.port = int(self.config.get(self.CONFIG_SECTION, 'port'))
self.develop = True
self.purge = purge
@ -698,7 +743,7 @@ class vulnWhispererOpenVAS(vulnWhispererBase):
report_name = 'openvas_scan_{scan_name}_{last_updated}.{extension}'.format(scan_name=scan_name,
last_updated=launched_date,
extension=output_format)
relative_path_name = self.path_check(report_name)
relative_path_name = self.path_check(report_name).encode('utf8')
scan_reference = report_id
if os.path.isfile(relative_path_name):
@ -714,6 +759,7 @@ class vulnWhispererOpenVAS(vulnWhispererBase):
self.CONFIG_SECTION,
report_id,
1,
0,
)
self.record_insert(record_meta)
self.logger.info('File {filename} already exists! Updating database'.format(filename=relative_path_name))
@ -767,7 +813,7 @@ class vulnWhispererOpenVAS(vulnWhispererBase):
else:
self.logger.info('No new scans to process. Exiting...')
self.conn.close()
return 0
return self.exit_code
class vulnWhispererQualysVuln(vulnWhispererBase):
@ -808,7 +854,6 @@ class vulnWhispererQualysVuln(vulnWhispererBase):
scan_reference=None,
output_format='json',
cleanup=True):
try:
if 'Z' in launched_date:
launched_date = self.qualys_scan.utils.iso_to_epoch(launched_date)
@ -816,7 +861,7 @@ class vulnWhispererQualysVuln(vulnWhispererBase):
+ '_{last_updated}'.format(last_updated=launched_date) \
+ '.json'
relative_path_name = self.path_check(report_name)
relative_path_name = self.path_check(report_name).encode('utf8')
if os.path.isfile(relative_path_name):
#TODO Possibly make this optional to sync directories
@ -831,42 +876,44 @@ class vulnWhispererQualysVuln(vulnWhispererBase):
self.CONFIG_SECTION,
report_id,
1,
0,
)
self.record_insert(record_meta)
self.logger.info('File {filename} already exists! Updating database'.format(filename=relative_path_name))
else:
self.logger.info('Processing report ID: {}'.format(report_id))
vuln_ready = self.qualys_scan.process_data(scan_id=report_id)
if not vuln_ready.empty:
try:
self.logger.info('Processing report ID: {}'.format(report_id))
vuln_ready = self.qualys_scan.process_data(scan_id=report_id)
vuln_ready['scan_name'] = scan_name
vuln_ready['scan_reference'] = report_id
vuln_ready.rename(columns=self.COLUMN_MAPPING, inplace=True)
except Exception as e:
self.logger.error('Could not process {}: {}'.format(report_id, str(e)))
self.exit_code += 1
return self.exit_code
record_meta = (
scan_name,
scan_reference,
launched_date,
report_name,
time.time(),
vuln_ready.shape[0],
self.CONFIG_SECTION,
report_id,
1,
)
self.record_insert(record_meta)
record_meta = (
scan_name,
scan_reference,
launched_date,
report_name,
time.time(),
vuln_ready.shape[0],
self.CONFIG_SECTION,
report_id,
1,
0,
)
self.record_insert(record_meta)
if output_format == 'json':
with open(relative_path_name, 'w') as f:
f.write(vuln_ready.to_json(orient='records', lines=True))
f.write('\n')
if output_format == 'json':
with open(relative_path_name, 'w') as f:
f.write(vuln_ready.to_json(orient='records', lines=True))
f.write('\n')
self.logger.info('Report written to {}'.format(report_name))
else:
return False
except Exception as e:
self.logger.error('Could not process {}: {}'.format(report_id, str(e)))
self.logger.info('Report written to {}'.format(report_name))
return self.exit_code
def identify_scans_to_process(self):
@ -887,15 +934,15 @@ class vulnWhispererQualysVuln(vulnWhispererBase):
for app in self.scans_to_process.iterrows():
counter += 1
r = app[1]
self.logger.debug('Processing {}/{}'.format(counter, len(self.scans_to_process)))
self.whisper_reports(report_id=r['id'],
self.logger.info('Processing {}/{}'.format(counter, len(self.scans_to_process)))
self.exit_code += self.whisper_reports(report_id=r['id'],
launched_date=r['date'],
scan_name=r['name'],
scan_reference=r['type'])
else:
self.logger.info('No new scans to process. Exiting...')
self.conn.close()
return 0
return self.exit_code
class vulnWhispererJIRA(vulnWhispererBase):
@ -975,7 +1022,7 @@ class vulnWhispererJIRA(vulnWhispererBase):
sys.exit(0)
#datafile path
filename = self.get_latest_results(source, scan_name)
filename, reported = self.get_latest_results(source, scan_name)
fullpath = ""
# search data files under user specified directory
@ -983,11 +1030,24 @@ class vulnWhispererJIRA(vulnWhispererBase):
if filename in filenames:
fullpath = "{}/{}".format(root,filename)
if not fullpath:
self.logger.error('Scan file path "{scan_name}" for source "{source}" has not been found.'.format(scan_name=scan_name, source=source))
sys.exit(1)
if reported:
self.logger.warn('Last scan of "{scan_name}" for source "{source}" has already been reported; it will be skipped.'.format(scan_name=scan_name, source=source))
return [False] * 5
return project, components, fullpath, min_critical
if not fullpath:
self.logger.error('Scan of "{scan_name}" for source "{source}" has not been found. Please check that the scanner data files are in place.'.format(scan_name=scan_name, source=source))
sys.exit(1)
dns_resolv = self.config.get('jira','dns_resolv')
if dns_resolv in ('False', 'false', ''):
dns_resolv = False
elif dns_resolv in ('True', 'true'):
dns_resolv = True
else:
self.logger.error("dns_resolv variable not setup in [jira] section; will not do dns resolution")
dns_resolv = False
return project, components, fullpath, min_critical, dns_resolv
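
The truthy/falsy handling above re-implements what the stdlib already provides. A sketch of the equivalent using getboolean, with the same fall-back to False (ConfigParser module spelling as in Python 2, which this codebase targets):

from ConfigParser import ConfigParser, NoOptionError

config = ConfigParser()
config.read('frameworks_example.ini')

try:
    # getboolean accepts 1/yes/true/on and 0/no/false/off, case-insensitively
    dns_resolv = config.getboolean('jira', 'dns_resolv')
except (NoOptionError, ValueError):
    dns_resolv = False  # same default the hand-rolled check falls back to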
def parse_nessus_vulnerabilities(self, fullpath, source, scan_name, min_critical):
@ -1036,7 +1096,7 @@ class vulnWhispererJIRA(vulnWhispererBase):
return vulnerabilities
def parse_qualys_vuln_vulnerabilities(self, fullpath, source, scan_name, min_critical):
def parse_qualys_vuln_vulnerabilities(self, fullpath, source, scan_name, min_critical, dns_resolv = False):
#parsing of the qualys vulnerabilities schema
#parse json
vulnerabilities = []
@ -1044,8 +1104,12 @@ class vulnWhispererJIRA(vulnWhispererBase):
risks = ['info', 'low', 'medium', 'high', 'critical']
# +1 as array is 0-4, but score is 1-5
min_risk = int([i for i,x in enumerate(risks) if x == min_critical][0])+1
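# equivalent, for valid values: min_risk = risks.index(min_critical) + 1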
data=[json.loads(line) for line in open(fullpath).readlines()]
try:
data=[json.loads(line) for line in open(fullpath).readlines()]
except Exception as e:
self.logger.warn("Scan has no vulnerabilities, skipping.")
return vulnerabilities
#qualys fields we want - []
for index in range(len(data)):
@ -1069,7 +1133,7 @@ class vulnWhispererJIRA(vulnWhispererBase):
vuln['ips'] = []
# TODO: DNS resolution added from Qualys data; use \n separators instead of \\n
vuln['ips'].append("{ip} - {protocol}/{port} - {dns}".format(**self.get_asset_fields(data[index])))
vuln['ips'].append("{ip} - {protocol}/{port} - {dns}".format(**self.get_asset_fields(data[index], dns_resolv)))
#different risk system than Nessus!
vuln['risk'] = risks[int(data[index]['risk'])-1]
@ -1084,31 +1148,32 @@ class vulnWhispererJIRA(vulnWhispererBase):
# grouping assets by vulnerability to open on single ticket, as each asset has its own nessus entry
for vuln in vulnerabilities:
if vuln['title'] == data[index]['plugin_name']:
vuln['ips'].append("{ip} - {protocol}/{port} - {dns}".format(**self.get_asset_fields(data[index])))
vuln['ips'].append("{ip} - {protocol}/{port} - {dns}".format(**self.get_asset_fields(data[index], dns_resolv)))
return vulnerabilities
def get_asset_fields(self, vuln):
def get_asset_fields(self, vuln, dns_resolv):
values = {}
values['ip'] = vuln['ip']
values['protocol'] = vuln['protocol']
values['port'] = vuln['port']
values['dns'] = ''
if vuln['dns']:
values['dns'] = vuln['dns']
else:
if values['ip'] in self.host_resolv_cache.keys():
self.logger.debug("Hostname from {ip} cached, retrieving from cache.".format(ip=values['ip']))
values['dns'] = self.host_resolv_cache[values['ip']]
if dns_resolv:
if vuln['dns']:
values['dns'] = vuln['dns']
else:
self.logger.debug("No hostname, trying to resolve {ip}'s hostname.".format(ip=values['ip']))
try:
values['dns'] = socket.gethostbyaddr(vuln['ip'])[0]
self.host_resolv_cache[values['ip']] = values['dns']
self.logger.debug("Hostname found: {hostname}.".format(hostname=values['dns']))
except:
self.host_resolv_cache[values['ip']] = ''
self.logger.debug("Hostname not found for: {ip}.".format(ip=values['ip']))
if values['ip'] in self.host_resolv_cache.keys():
self.logger.debug("Hostname from {ip} cached, retrieving from cache.".format(ip=values['ip']))
values['dns'] = self.host_resolv_cache[values['ip']]
else:
self.logger.debug("No hostname, trying to resolve {ip}'s hostname.".format(ip=values['ip']))
try:
values['dns'] = socket.gethostbyaddr(vuln['ip'])[0]
self.host_resolv_cache[values['ip']] = values['dns']
self.logger.debug("Hostname found: {hostname}.".format(hostname=values['dns']))
except:
self.host_resolv_cache[values['ip']] = ''
self.logger.debug("Hostname not found for: {ip}.".format(ip=values['ip']))
for key in values.keys():
if not values[key]:
@ -1126,7 +1191,11 @@ class vulnWhispererJIRA(vulnWhispererBase):
def jira_sync(self, source, scan_name):
self.logger.info("Jira Sync triggered for source '{source}' and scan '{scan_name}'".format(source=source, scan_name=scan_name))
project, components, fullpath, min_critical = self.get_env_variables(source, scan_name)
project, components, fullpath, min_critical, dns_resolv = self.get_env_variables(source, scan_name)
if not project:
self.logger.debug("Skipping scan for source '{source}' and scan '{scan_name}': vulnerabilities have already been reported.".format(source=source, scan_name=scan_name))
return False
vulnerabilities = []
@ -1136,7 +1205,7 @@ class vulnWhispererJIRA(vulnWhispererBase):
#***Qualys VM parsing***
if source == "qualys_vuln":
vulnerabilities = self.parse_qualys_vuln_vulnerabilities(fullpath, source, scan_name, min_critical)
vulnerabilities = self.parse_qualys_vuln_vulnerabilities(fullpath, source, scan_name, min_critical, dns_resolv)
#***JIRA sync***
if vulnerabilities:
@ -1145,9 +1214,11 @@ class vulnWhispererJIRA(vulnWhispererBase):
self.jira.sync(vulnerabilities, project, components)
else:
self.logger.info("Vulnerabilities from {source} has not been parsed! Exiting...".format(source=source))
sys.exit(0)
self.logger.info("[{source}.{scan_name}] No vulnerabilities or vulnerabilities not parsed.".format(source=source, scan_name=scan_name))
self.set_latest_scan_reported(fullpath.split("/")[-1])
return False
self.set_latest_scan_reported(fullpath.split("/")[-1])
return True
def sync_all(self):
@ -1180,6 +1251,7 @@ class vulnWhisperer(object):
self.verbose = verbose
self.source = source
self.scanname = scanname
self.exit_code = 0
def whisper_vulnerabilities(self):
@ -1190,15 +1262,15 @@ class vulnWhisperer(object):
password=self.password,
verbose=self.verbose,
profile=self.profile)
vw.whisper_nessus()
self.exit_code += vw.whisper_nessus()
elif self.profile == 'qualys_web':
vw = vulnWhispererQualys(config=self.config)
vw.process_web_assets()
self.exit_code += vw.process_web_assets()
elif self.profile == 'openvas':
vw_openvas = vulnWhispererOpenVAS(config=self.config)
vw_openvas.process_openvas_scans()
self.exit_code += vw_openvas.process_openvas_scans()
elif self.profile == 'tenable':
vw = vulnWhispererNessus(config=self.config,
@ -1206,11 +1278,11 @@ class vulnWhisperer(object):
password=self.password,
verbose=self.verbose,
profile=self.profile)
vw.whisper_nessus()
self.exit_code += vw.whisper_nessus()
elif self.profile == 'qualys_vuln':
vw = vulnWhispererQualysVuln(config=self.config)
vw.process_vuln_scans()
self.exit_code += vw.process_vuln_scans()
elif self.profile == 'jira':
#first we check config fields are created, otherwise we create them
@ -1224,3 +1296,5 @@ class vulnWhisperer(object):
return 0
else:
vw.jira_sync(self.source, self.scanname)
return self.exit_code
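
With every profile branch now accumulating into self.exit_code, the process exit status reflects partial failures instead of always reporting success. A sketch of how an entry point is expected to consume it (the import path and constructor arguments are illustrative, not copied from bin/vuln_whisperer):

import sys

from vulnwhisp.vulnwhisp import vulnWhisperer  # illustrative import path

vw = vulnWhisperer(config='frameworks_example.ini', profile='qualys_vuln')
exit_code = vw.whisper_vulnerabilities()

# non-zero whenever a scan failed to download or process, so cron/CI can alert on it
sys.exit(exit_code)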