Add relationship descriptions (#40)
* adding update to spec

* Update README.md

* add description to relationships

* removing ctibutler

* add description to more rels #28

* Update ai_session.py

* Update ai_session.py

* Update ai_session.py

* adding vulmatch and ctibutler spec

* Update stix-mapping.md

* reordering tests directory

* fixing some lookups

* Update test_cases.yaml

* change remote lookup #36+#31

* replace txt2stix notes with DEBUG log #35

* add --report_id flag #33

---------

Co-authored-by: Fadl <[email protected]>
himynamesdave and fqrious authored Oct 27, 2024
1 parent 2814534 commit 63ff37c
Showing 134 changed files with 6,266 additions and 6,846 deletions.
12 changes: 9 additions & 3 deletions .env.sample
@@ -1,8 +1,14 @@
INPUT_TOKEN_LIMIT=50 # [REQUIRED] keep in mind the token limit for selected model (which includes both input AND output tokens). For example, if your input limit is 50,000 characters, this could incur up to 25,000 tokens. Assuming your selected model allows for 64,000 tokens, you will therefore be able to obtain an output of over 39,000 tokens.
INPUT_TOKEN_LIMIT=50 # [REQUIRED] for AI modes. keep in mind the token limit for selected model (which includes both input AND output tokens). For example, if your input limit is 50,000 characters, this could incur up to 25,000 tokens. Assuming your selected model allows for 64,000 tokens, you will therefore be able to obtain an output of over 39,000 tokens.
OPENAI_API_KEY= # [REQUIRED IF USING AI MODES] get it from https://platform.openai.com/api-keys
OPENAI_MODEL=gpt-4 # [REQUIRED IF USING AI MODES] choose an OpenAI model of your choice. Ensure the input/output token count meets requirements (and adjust INPUT_TOKEN_LIMIT accordingly). List of models here: https://platform.openai.com/docs/models
ARANGODB_HOST_URL=https://database.ctibutler.com:8529 # [REQUIRED] user can also self host
ARANGODB_USERNAME= # [REQUIRED] user must have write privileges
ARANGODB_USERNAME= # [REQUIRED] user must have write privileges to the ARANGODB_DATABASE specified
ARANGODB_PASSWORD= # [REQUIRED] password for specified username
ARANGODB_DATABASE=cti_database # [REQUIRED] database where collections are held; if using CTI Butler, this is cti_database
BIN_LIST_API_KEY= #[OPTIONAL] needed for extracting credit card information
BIN_LIST_API_KEY= #[OPTIONAL] needed for extracting credit card information
## CTIBUTLER FOR ATT&CK, CAPEC, AND CWE LOOKUPS
CTIBUTLER_HOST= # [REQUIRED] e.g. http://localhost:8006/
CTIBUTLER_APIKEY= #[OPTIONAL] if using https://app.ctibutler.com
## VULMATCH FOR CVE AND CPE LOOKUPS
VULMATCH_HOST= # [REQUIRED] e.g. http://localhost:8005/
VULMATCH_APIKEY= #[OPTIONAL] if using https://app.vulmatch.com
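
For illustration, here is a minimal sketch of how these settings might be read at runtime, assuming the `python-dotenv` package; txt2stix's own settings handling may differ:

```python
# sketch only: assumes python-dotenv is installed (pip install python-dotenv)
import os

from dotenv import load_dotenv

load_dotenv()  # reads .env from the current working directory

INPUT_TOKEN_LIMIT = int(os.environ["INPUT_TOKEN_LIMIT"])               # required
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")                           # required only for AI modes
CTIBUTLER_HOST = os.getenv("CTIBUTLER_HOST", "http://localhost:8006/") # ATT&CK, CAPEC, CWE lookups
VULMATCH_HOST = os.getenv("VULMATCH_HOST", "http://localhost:8005/")   # CVE and CPE lookups
```
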
112 changes: 10 additions & 102 deletions README.md
@@ -2,107 +2,29 @@

## Overview

![txt2stix](docs/txt2stix.png)

txt2stix is a Python script that is designed to identify and extract IoCs and TTPs from text files, identify the relationships between them, convert them to STIX 2.1 objects, and output them as a STIX 2.1 bundle.

The general design goal of txt2stix was to keep it flexible, but simple, so that new extractions could be added or modified over time.

In short, txt2stix;

1. takes a txt file input
2. rewrites file with enabled aliases
3. extracts observables for enabled extractions (and ignores any whitelisted values)
4. converts extracted observables to STIX 2.1 objects
5. generates the relationships between extracted observables
6. converts extracted relationships to STIX 2.1 SRO objects
7. outputs a STIX 2.1 bundle
2. (optional) rewrites file with enabled aliases
3. extracts observables for enabled extractions (ai, pattern, or lookup)
4. (optional) removes any extractions that match whitelists
5. converts extracted observables to STIX 2.1 objects
6. generates the relationships between extracted observables (ai, standard)
7. converts extracted relationships to STIX 2.1 SRO objects
8. outputs a STIX 2.1 bundle
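
To make the ordering concrete, here is a deliberately toy, self-contained sketch of that flow (one hard-coded alias, one pattern extraction, one whitelist entry); it is not how txt2stix is actually implemented:

```python
import re

def mini_txt2stix(text: str) -> dict:
    # step 2: rewrite enabled aliases
    text = text.replace("United States", "USA")
    # step 3: run one pattern extraction (IPv4 addresses)
    hits = re.findall(r"(?:[0-9]{1,3}\.){3}[0-9]{1,3}", text)
    # step 4: drop whitelisted values
    hits = [h for h in hits if h not in {"127.0.0.1"}]
    # steps 5-7: stand-in dicts in place of real STIX 2.1 SDOs/SCOs and SROs
    objects = [{"type": "ipv4-addr", "value": h} for h in hits]
    relationships = [{"type": "relationship", "relationship_type": "related-to", "target": h} for h in hits]
    # step 8: stand-in "bundle"
    return {"type": "bundle", "objects": objects + relationships}

print(mini_txt2stix("Beaconing to 203.0.113.9 and 127.0.0.1 from hosts in the United States"))
```
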

## tl;dr

[![txt2stix](https://img.youtube.com/vi/TWVGCou9oGk/0.jpg)](https://www.youtube.com/watch?v=TWVGCou9oGk)

[Watch the demo](https://www.youtube.com/watch?v=TWVGCou9oGk).

## The problem

More and more organisations are standardising the way they represent threat intelligence using the STIX 2.1 data model.

As a result, an increasing number of SIEMs, SOARs, TIPs, etc. have native STIX 2.1 support.

However, authoring STIX 2.1 content can be laborious. I have seen analysts manually copy and paste data from reports, blogs, emails, and other sources into STIX 2.1 Objects.

In many cases these Observables (IOCs) can be automatically detected in plain text using defined patterns.

For example, an IPv4 observable has a specific pattern that can be identified using regular expressions. This regular expression will match an IPv4 observable;

```regex
^(?:[0-9]{1,3}\.){3}[0-9]{1,3}$
```

Similarly, the following regular expression will capture URLs;

```regex
^(https?|ftp|file)://.+$
```

Both of these examples ([here](https://www.oreilly.com/library/view/regular-expressions-cookbook/9780596802837/ch07s16.html) and [here](https://www.oreilly.com/library/view/regular-expressions-cookbook/9781449327453/ch08s01.html), respectively) are taken from the brilliant Regular Expressions Cookbook (2nd edition) by Jan Goyvaerts and Steven Levithan.
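
As a rough sketch (not txt2stix's actual extraction code), the anchored patterns above can be applied to whitespace-separated tokens of the input text:

```python
import re

IPV4 = re.compile(r"^(?:[0-9]{1,3}\.){3}[0-9]{1,3}$")
URL = re.compile(r"^(https?|ftp|file)://.+$")

text = "C2 traffic observed to 198.51.100.23 and https://malicious.example/payload"

# split on whitespace and punctuation, then test each token against the anchored patterns
tokens = re.split(r"[\s,;]+", text)
print([t for t in tokens if IPV4.match(t)])  # ['198.51.100.23']
print([t for t in tokens if URL.match(t)])   # ['https://malicious.example/payload']
```
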

Now this isn't rocket science, and indeed there are already quite a few open source tools that contain regular expressions for extracting Observables in this way;

* [IoC extractor](https://github.com/ninoseki/ioc-extractor): An npm package for extracting common IoC (Indicator of Compromise)
* [IOC Finder](https://github.com/fhightower/ioc-finder): Simple, effective, and modular package for parsing Observables (indicators of compromise (IOCs), network data, and other, security related information) from text.
* [cacador](https://github.com/sroberts/cacador): Indicator Extractor
* [iocextract](https://github.com/InQuest/python-iocextract): Defanged Indicator of Compromise (IOC) Extractor.
* [Cyobstract](https://github.com/cmu-sei/cyobstract): A tool to extract structured cyber information from incident reports.

However, we wanted a more modularised extraction logic, especially to take advantage of the new accessibility of AI.

## Concepts

Here is an overview of how txt2stix processes txt files into STIX 2.1 bundles:

https://miro.com/app/board/uXjVKEyFzB8=/

### Extractions

This is the logic that actually extracts data from the text of the input document.

txt2stix has 3 types of extractions;

1. AI: uses an LLM to extract the data based on a prompt
    * when to use: for contextual data that can't be easily detected using patterns
    * when not to use: when costs are an issue, or when the user will not review the output for errors
2. Pattern: all extractions are performed by regular expressions (or via existing Python libraries)
    * when to use: for data that follows a well-defined pattern
    * when not to use: for contextual data that a pattern cannot reliably capture
3. Lookups: txt2stix will compare strings in the input document against a list of strings in lookups
    * when to use: for specialist data not easily detected using patterns
    * when not to use: for large amounts of data (in the lookup)
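
As an illustration of the lookup style (a sketch, not the real implementation), a lookup is essentially a set-membership test over the input text:

```python
# hypothetical lookup: a set of strings loaded from a lookup file
lookup = {"Cobalt Strike", "Mimikatz", "PsExec"}

text = "The actor deployed Cobalt Strike and used Mimikatz for credential dumping."

# report every lookup entry that appears verbatim in the document
matches = [term for term in lookup if term in text]
print(matches)  # e.g. ['Cobalt Strike', 'Mimikatz'] (set order may vary)
```
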

### Relationships

This is how extractions are joined together using STIX SROs.

There are 2 relationship modes in txt2stix;

* `ai`: Rich relationships created by an LLM between extractions.
* `standard`: Basic relationships created from each extraction back to the master Report object generated.
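
For example, a standard-mode relationship could be represented with the `stix2` library roughly as follows (a sketch; the exact objects, direction, and `relationship_type` values txt2stix emits may differ):

```python
from stix2 import IPv4Address, Relationship, Report

ip = IPv4Address(value="198.51.100.23")  # one extracted observable

report = Report(                         # the master Report object
    name="Example threat report",
    published="2024-10-27T00:00:00Z",
    object_refs=[ip.id],
)

sro = Relationship(                      # standard mode: extraction linked back to the report
    source_ref=report.id,
    relationship_type="related-to",
    target_ref=ip.id,
)
print(sro.serialize(pretty=True))
```
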

### Aliases

In many cases two extractions might refer to the same thing. For example, the extractions `USA`, `United States`, and `United States of America` all refer to the same thing.

Aliases normalise the input text before extractions happen so that the same extraction is produced, e.g. changing `United States` -> `USA`.
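
A minimal sketch of that normalisation step (hypothetical alias table; the real alias files use their own format):

```python
ALIASES = {
    "United States of America": "USA",
    "United States": "USA",
}

def apply_aliases(text: str) -> str:
    # replace longer aliases first so "United States of America" is not
    # partially rewritten by the shorter "United States" entry
    for alias in sorted(ALIASES, key=len, reverse=True):
        text = text.replace(alias, ALIASES[alias])
    return text

print(apply_aliases("Campaign targeting the United States of America"))
# Campaign targeting the USA
```
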

### Whitelists

In many cases files will contain IoC extractions that are not malicious, e.g. `google.com` (and thus users don't want them to appear in the final bundle).

Whitelists provide a list of values to be compared to extractions. If a whitelist value matches an extraction, that extraction is removed, and any relationships where it is the `source_ref` or `target_ref` are also removed so that a user does not see them.

Design decision: this is done after extraction to save tokens with AI providers (otherwise you might easily be passing 10,000+ extra tokens to the AI).

Note, whitelists are intentionally simplistic in txt2stix. If you want more advanced removal of potentially benign extractions, you should use another tool, such as a Threat Intelligence Platform.
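
A sketch of that post-extraction filtering step (illustrative data structures only, not txt2stix's internals):

```python
WHITELIST = {"google.com"}

extractions = [
    {"id": "domain--1", "value": "google.com"},
    {"id": "domain--2", "value": "evil.example"},
]
relationships = [
    {"source_ref": "report--1", "target_ref": "domain--1"},
    {"source_ref": "report--1", "target_ref": "domain--2"},
]

# drop whitelisted extractions, then drop any relationship that still points at them
removed = {e["id"] for e in extractions if e["value"] in WHITELIST}
extractions = [e for e in extractions if e["id"] not in removed]
relationships = [
    r for r in relationships
    if r["source_ref"] not in removed and r["target_ref"] not in removed
]
print(extractions)    # only evil.example remains
print(relationships)  # only the relationship to evil.example remains
```
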

## Usage

### Setup
@@ -128,20 +50,6 @@ cp .env.sample .env

You can now set the correct values in `.env`.

A quick note on the OPENAI and ARANGODB variables...

`OPENAI_*` properties are required should you want to use AI based extractions or AI relationship mode. If left blank, you can use pattern extractions and standard relationship modes only.

`ARANGODB_*` properties are required if you want to use MITRE ATT&CK, MITRE CWE, MITRE CAPEC, NVD CPE, or NVD CVE extractions. You must define an ArangoDB instance with the required data in the expected format in order for these extraction types to work.

You can populate your own instance of ArangoDB with the required data by using the scripts referenced in the [stix2arango](https://github.com/muchdogesec/stix2arango) quickstart.

**Make life simpler for yourself...**

If you do not want to backfill, maintain, or support your own ArangoDB STIX objects, check out CTI Butler, which provides a fully managed database of these objects you can use with txt2stix.

https://www.ctibutler.com/

### Usage

```shell
# ... (usage examples collapsed in this diff view)
```
@@ -187,7 +95,7 @@ Currently it is not possible to easily add any other types of extractions (witho

## Detailed documentation

If you would like to understand how txt2stix works in more detail, please refer to the documentation in `/doc`.
If you would like to understand how txt2stix works in more detail, please refer to the documentation in `/docs/README.md`.

This documentation is particularly helpful to read for those of you wanting to add your own custom extractions.

