making variables more clear (#8)
himynamesdave authored Nov 5, 2024
1 parent 2a2cd68 commit 5b0782c
Showing 7 changed files with 144 additions and 66 deletions.
56 changes: 27 additions & 29 deletions .env.example
@@ -1,43 +1,41 @@
#django settings
-DJANGO_SECRET=insecure_django_secret
-DJANGO_DEBUG=True
-DJANGO_ALLOWED_HOSTS=#MODIFY IT RUNNING ON NON LOCAL SERVERS
-DJANGO_CORS_ALLOW_ALL_ORIGINS=True #MODIFY TO FALSE IF RUNNING IN PROD
-DJANGO_CORS_ALLOWED_ORIGINS=#MODIFY IF RUNNING IN PROD
+DJANGO_SECRET=
+DJANGO_DEBUG=
+DJANGO_ALLOWED_HOSTS=
+DJANGO_CORS_ALLOW_ALL_ORIGINS=
+DJANGO_CORS_ALLOWED_ORIGINS=
#postgres settings
POSTGRES_HOST=
POSTGRES_PORT=
-POSTGRES_DB=postgres
-POSTGRES_USER=postgres
-POSTGRES_PASSWORD=postgres
+POSTGRES_DB=
+POSTGRES_USER=
+POSTGRES_PASSWORD=
#celery settings
-CELERY_BROKER_CONNECTION_RETRY_ON_STARTUP=1
+CELERY_BROKER_CONNECTION_RETRY_ON_STARTUP=
# obstracts settings
-MAX_PAGE_SIZE=50 # max size of api response payload
-DEFAULT_PAGE_SIZE=50 # default size of api response payload
+MAX_PAGE_SIZE=
+DEFAULT_PAGE_SIZE=
# stix2arango settings
-ARANGODB_HOST_URL='http://host.docker.internal:8529'
-ARANGODB_USERNAME=root
+ARANGODB_HOST_URL=
+ARANGODB_USERNAME=
ARANGODB_PASSWORD=
# history4feed settings
-HISTORY4FEED_URL='http://host.docker.internal:8002/'
+HISTORY4FEED_URL=
# txt2stix settings
-BIN_LIST_API_KEY= #[OPTIONAL -- for enriching credit card extractions] needed for extracting credit card information
-OPENAI_API_KEY= # [REQUIRED IF USING AI MODES] needed if using AI relationship mode or AI extractions
-OPENAI_MODEL=gpt-4 # [REQUIRED IF USING AI MODES] choose an OpenAI model of your choice. Ensure the input/output token count meets requirements (and adjust INPUT_TOKEN_LIMIT accordingly). List of models here: https://platform.openai.com/docs/models
-INPUT_TOKEN_LIMIT=50
+BIN_LIST_API_KEY=
+OPENAI_API_KEY=
+OPENAI_MODEL=
+INPUT_TOKEN_LIMIT=
## CTIBUTLER FOR ATT&CK, CAPEC, CWE, ATLAS, AND LOCATION LOOKUPS
-CTIBUTLER_HOST='http://host.docker.internal:8006'
-CTIBUTLER_APIKEY= #[OPTIONAL] if using https://app.ctibutler.com
+CTIBUTLER_HOST=
## VULMATCH FOR CVE AND CPE LOOKUPS
-VULMATCH_HOST='http://host.docker.internal:8005'
-VULMATCH_APIKEY= #[OPTIONAL] if using https://app.vulmatch.com
+VULMATCH_HOST=
# file2txt settings
-GOOGLE_VISION_API_KEY= # [REQUIRED -- to extract text from blog images]
+GOOGLE_VISION_API_KEY=
# R2 storage configuration
-USE_S3_STORAGE=0 # Need to set to 1 to enable
-R2_ENDPOINT_URL= # Looks like https://ID.r2.cloudflarestorage.com
-R2_BUCKET_NAME= # the bucket name
-R2_ACCESS_KEY= # generated when creating an R2 API token. Make sure has read+write to R2_BUCKET_NAME specified
-R2_SECRET_KEY= # generated when creating an R2 API token
-R2_CUSTOM_DOMAIN= # this value is optional, but if you don't set your bucket to public, your images will hit 403s as they will hit the raw endpoint (e.g. https://ID.r2.cloudflarestorage.com/BUCKET/IMAGE/PATH.jpg) which will be inaccessible. The easiest way to do this is to enable R2.dev subdomain for the bucket. Looks like pub-ID.r2.dev . Do not include the https:// part
+USE_S3_STORAGE=
+R2_ENDPOINT_URL=
+R2_BUCKET_NAME=
+R2_ACCESS_KEY=
+R2_SECRET_KEY=
+R2_CUSTOM_DOMAIN=
116 changes: 116 additions & 0 deletions .env.markdown
@@ -0,0 +1,116 @@
# Environment file info

If you're running in production, you should set these values securely.

However, if you just want to experiment, you can use the values suggested below.

## Django Settings

These are all Django settings, defined in `obstracts/settings.py`

* `DJANGO_SECRET`: `insecure_django_secret`
* `DJANGO_DEBUG`: `True`
* `DJANGO_ALLOWED_HOSTS`: BLANK
* `DJANGO_CORS_ALLOW_ALL_ORIGINS`: `True`
* `DJANGO_CORS_ALLOWED_ORIGINS`: BLANK
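
Taken together, a minimal Django block for a local `.env` might look like this sketch, using the experiment-friendly values above (do not use these in production):

```shell
DJANGO_SECRET=insecure_django_secret
DJANGO_DEBUG=True
DJANGO_ALLOWED_HOSTS=
DJANGO_CORS_ALLOW_ALL_ORIGINS=True
DJANGO_CORS_ALLOWED_ORIGINS=
```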

## Postgres Settings

These are the Postgres connection settings used by Django, also defined in `obstracts/settings.py`

* `POSTGRES_HOST`: `localhost`
* `POSTGRES_PORT`: BLANK
* `POSTGRES_DB`: `postgres`
* `POSTGRES_USER`: `postgres`
* `POSTGRES_PASSWORD`: `postgres`
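
For local experimentation against a Postgres instance on the same machine, the corresponding `.env` block would be the following sketch of the values above (again, not production settings):

```shell
POSTGRES_HOST=localhost
POSTGRES_PORT=
POSTGRES_DB=postgres
POSTGRES_USER=postgres
POSTGRES_PASSWORD=postgres
```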

## Celery settings

* `CELERY_BROKER_CONNECTION_RETRY_ON_STARTUP`: `1`

## Obstracts API settings

These define how the API behaves.

* `MAX_PAGE_SIZE`: `50`
* The maximum number of results the API will return on a single page
* `DEFAULT_PAGE_SIZE`: `50`
* The number of results the API returns per page when no page size is requested
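
To illustrate how the two settings interact, here is a sketch of two requests. The port, route, and `page_size` parameter are illustrative assumptions, not documented Obstracts endpoints; adjust them to your install.

```shell
# Hypothetical route and parameter -- adjust to your install.
# Requesting 100 results per page: the response is capped at MAX_PAGE_SIZE (50).
curl 'http://127.0.0.1:8001/api/v1/posts/?page_size=100'

# No page_size supplied: DEFAULT_PAGE_SIZE (50) results come back.
curl 'http://127.0.0.1:8001/api/v1/posts/'
```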

## ArangoDB settings

Note: this code will not install an ArangoDB instance.

If you're new to ArangoDB, [you can install the community edition quickly by following the instructions here](https://arangodb.com/community-server/).

The script will automatically create a database called `obstracts_database` when the container is spun up (if it does not exist).

For each blog added, two new collections will be created in the format

`<FEED_NAME>_<FEED_ID>-<COLLECTION_TYPE>_collection`

e.g.

* `graham_cluley_9288374-0298740-94875-vertex_collection`
* `graham_cluley_9288374-0298740-94875-edge_collection`


* `ARANGODB_HOST_URL`: `'http://host.docker.internal:8529'`
* If you are running ArangoDB locally, be sure to set `ARANGODB_HOST_URL='http://host.docker.internal:8529'` in the `.env` file, otherwise you will run into networking errors.
* `ARANGODB_USERNAME`: `root`
* Change this if needed
* `ARANGODB_PASSWORD`: the password for the `ARANGODB_USERNAME` account
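
Once ArangoDB is running and your `.env` values are set, a quick way to check connectivity and see the collections Obstracts has created is ArangoDB's HTTP API. A sketch, assuming the host and credentials shown above (`yourpassword` is a placeholder):

```shell
# Check the server is reachable with your credentials
curl -u root:yourpassword http://host.docker.internal:8529/_api/version

# List the collections in the database Obstracts creates
curl -u root:yourpassword \
  http://host.docker.internal:8529/_db/obstracts_database/_api/collection
```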

## history4feed settings

Obstracts requires [history4feed](https://github.com/muchdogesec/history4feed) to download and store blog posts.

* `HISTORY4FEED_URL`: `'http://host.docker.internal:8002/'`
* If you are running history4feed locally, be sure to set `HISTORY4FEED_URL='http://host.docker.internal:8002/'` in the `.env` file, otherwise you will run into networking errors.

## txt2stix settings

* `BIN_LIST_API_KEY`: BLANK
* (OPTIONAL) Needed for enriching credit card extractions
* `OPENAI_API_KEY`: YOUR_API_KEY
* (REQUIRED IF USING AI MODES) Get it from https://platform.openai.com/api-keys
* `OPENAI_MODEL`: `gpt-4`
* (REQUIRED IF USING AI MODES) List of models here: https://platform.openai.com/docs/models
* `INPUT_TOKEN_LIMIT`: `15000`
* (REQUIRED IF USING AI MODES) Ensure the input/output token count meets requirements and is supported by the model selected
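
If you plan to use the AI modes, the txt2stix block of your `.env` would look something like this sketch (the key is a placeholder; `gpt-4` and `15000` are the example values above):

```shell
BIN_LIST_API_KEY=
OPENAI_API_KEY=sk-YOUR_API_KEY
OPENAI_MODEL=gpt-4
INPUT_TOKEN_LIMIT=15000
```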

## CTIBUTLER

Obstracts requires [ctibutler](https://github.com/muchdogesec/ctibutler) to look up ATT&CK, CAPEC, CWE, ATLAS, and locations in blogs

* `CTIBUTLER_HOST`: `'http://host.docker.internal:8006'`
* If you are running CTI Butler locally, be sure to set `CTIBUTLER_HOST='http://host.docker.internal:8006'` in the `.env` file, otherwise you will run into networking errors.

## VULMATCH

Obstracts requires [vulmatch](https://github.com/muchdogesec/vulmatch) to look up CVEs and CPEs in blogs

* `VULMATCH_HOST`: `'http://host.docker.internal:8005'`
* If you are running Vulmatch locally, be sure to set `VULMATCH_HOST='http://host.docker.internal:8005'` in the `.env` file, otherwise you will run into networking errors.
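
Because Obstracts depends on history4feed, CTI Butler, and Vulmatch all being reachable, a quick sanity check before starting is to curl each base URL from your `.env` (a sketch assuming the local hosts shown above, including `HISTORY4FEED_URL` from the earlier section; a connection failure usually means a `localhost` vs `host.docker.internal` networking problem):

```shell
curl -s -o /dev/null -w "history4feed: %{http_code}\n" http://host.docker.internal:8002/
curl -s -o /dev/null -w "ctibutler:    %{http_code}\n" http://host.docker.internal:8006/
curl -s -o /dev/null -w "vulmatch:     %{http_code}\n" http://host.docker.internal:8005/
```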

## file2txt settings

* `GOOGLE_VISION_API_KEY`: YOUR_API_KEY
* (REQUIRED to extract text from blog images) Instructions here: https://github.com/muchdogesec/file2txt?tab=readme-ov-file#optional-add-googles-cloud-vision-api-key

## R2 storage configuration

You can choose to store static assets on Cloudflare R2. The default is local storage.

* `USE_S3_STORAGE`: `0`
* Set to `1` to enable
* `R2_ENDPOINT_URL`: BLANK
* Will be something like `https://ID.r2.cloudflarestorage.com`
* `R2_BUCKET_NAME`: BLANK
* The bucket name you want to use.
* `R2_ACCESS_KEY`: BLANK
* Generated when creating an R2 API token. Make sure it has read+write access to the `R2_BUCKET_NAME` specified
* `R2_SECRET_KEY`: BLANK
* Generated when creating an R2 API token
* `R2_CUSTOM_DOMAIN`: BLANK
* This value is optional when using R2, but if you don't set your bucket to public, your images will return 403 errors because they will be served from the raw endpoint (e.g. https://ID.r2.cloudflarestorage.com/BUCKET/IMAGE/PATH.jpg), which is inaccessible. The easiest fix is to enable the R2.dev subdomain for the bucket; it looks like `pub-ID.r2.dev`. Do not include the `https://` part.
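
For example, a filled-in R2 block might look like this sketch (the `ID` values are the placeholders from the descriptions above; `obstracts-assets` and the key values are invented for illustration):

```shell
USE_S3_STORAGE=1
R2_ENDPOINT_URL=https://ID.r2.cloudflarestorage.com
R2_BUCKET_NAME=obstracts-assets
R2_ACCESS_KEY=YOUR_R2_ACCESS_KEY
R2_SECRET_KEY=YOUR_R2_SECRET_KEY
R2_CUSTOM_DOMAIN=pub-ID.r2.dev
```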
3 changes: 0 additions & 3 deletions .env.obstracts-web
@@ -16,13 +16,10 @@ OPENAI_MODEL=
INPUT_TOKEN_LIMIT=

# CTIBUTLER FOR ATT&CK, CAPEC, CWE, ATLAS, AND LOCATION LOOKUPS
-
CTIBUTLER_HOST=
-CTIBUTLER_APIKEY=

# VULMATCH FOR CVE AND CPE LOOKUPS
VULMATCH_HOST=
-VULMATCH_APIKEY=

# file2txt settings
GOOGLE_VISION_API_KEY=
2 changes: 0 additions & 2 deletions .github/workflows/deploy-image-production.yml
@@ -55,9 +55,7 @@ jobs:
OPENAI_MODEL=${{ secrets.OPENAI_MODEL }}
INPUT_TOKEN_LIMIT=${{ secrets.INPUT_TOKEN_LIMIT }}
CTIBUTLER_HOST=${{ secrets.CTIBUTLER_HOST }}
-CTIBUTLER_APIKEY=${{ secrets.CTIBUTLER_APIKEY }}
VULMATCH_HOST=${{ secrets.VULMATCH_HOST }}
-VULMATCH_APIKEY=${{ secrets.VULMATCH_APIKEY }}
GOOGLE_VISION_API_KEY=${{ secrets.GOOGLE_VISION_API_KEY }}
USE_S3_STORAGE=${{ secrets.USE_S3_STORAGE }}1
R2_ENDPOINT_URL=${{ secrets.R2_ENDPOINT_URL }}
2 changes: 0 additions & 2 deletions .github/workflows/deploy-image-staging.yml
@@ -55,9 +55,7 @@ jobs:
OPENAI_MODEL=${{ secrets.OPENAI_MODEL }}
INPUT_TOKEN_LIMIT=${{ secrets.INPUT_TOKEN_LIMIT }}
CTIBUTLER_HOST=${{ secrets.CTIBUTLER_HOST }}
-CTIBUTLER_APIKEY=${{ secrets.CTIBUTLER_APIKEY }}
VULMATCH_HOST=${{ secrets.VULMATCH_HOST }}
-VULMATCH_APIKEY=${{ secrets.VULMATCH_APIKEY }}
GOOGLE_VISION_API_KEY=${{ secrets.GOOGLE_VISION_API_KEY }}
USE_S3_STORAGE=${{ secrets.USE_S3_STORAGE }}1
R2_ENDPOINT_URL=${{ secrets.R2_ENDPOINT_URL }}
4 changes: 0 additions & 4 deletions Dockerfile.deploy
@@ -11,9 +11,7 @@ ARG OPENAI_API_KEY=
ARG OPENAI_MODEL=
ARG INPUT_TOKEN_LIMIT=
ARG CTIBUTLER_HOST=
-ARG CTIBUTLER_APIKEY=
ARG VULMATCH_HOST=
-ARG VULMATCH_APIKEY=
ARG GOOGLE_VISION_API_KEY=
ARG USE_S3_STORAGE=
ARG R2_ENDPOINT_URL=
@@ -32,9 +30,7 @@ ENV OPENAI_API_KEY=${OPENAI_API_KEY}
ENV OPENAI_MODEL=${OPENAI_MODEL}
ENV INPUT_TOKEN_LIMIT=${INPUT_TOKEN_LIMIT}
ENV CTIBUTLER_HOST=${CTIBUTLER_HOST}
-ENV CTIBUTLER_APIKEY=${CTIBUTLER_APIKEY}
ENV VULMATCH_HOST=${VULMATCH_HOST}
-ENV VULMATCH_APIKEY=${VULMATCH_APIKEY}
ENV GOOGLE_VISION_API_KEY=${GOOGLE_VISION_API_KEY}
ENV USE_S3_STORAGE=${USE_S3_STORAGE}
ENV R2_ENDPOINT_URL=${R2_ENDPOINT_URL}
27 changes: 1 addition & 26 deletions README.md
@@ -32,14 +32,6 @@ It works at a high level like so:

## Install

-### Download and run history4feed
-
-Obstracts requires [history4feed](https://github.com/muchdogesec/history4feed) to download and store blog posts.
-
-You'll need to set the location of history4feed later in the Obstracts `.env` file.
-
-If you are running history4feed locally, be sure to set `HISTORY4FEED_URL='http://host.docker.internal:8002/'` in the `.env` file otherwise you will run into networking errors.
-
### Download and configure

```shell
@@ -57,24 +49,7 @@ To create one using the default settings:
cp .env.example .env
```

-#### A note on ArangoDB secrets
-
-Note, this script will not install an ArangoDB instance.
-
-If you're new to ArangoDB, [you can install the community edition quickly by following the instructions here](https://arangodb.com/community-server/).
-
-If you are running ArangoDB locally, be sure to set `ARANGODB_HOST_URL='http://host.docker.internal:8529'` in the `.env` file otherwise you will run into networking errors.
-
-The script will automatically create a database called `obstracts_database` when the container is spun up (if it does not exist).
-
-For each blog added, two new collections will be created in the format
-
-`<FEED_NAME>_<FEED_ID>-<COLLECTION_TYPE>_collection`
-
-e.g.
-
-* `graham_cluley_9288374-0298740-94875-vertex_collection`
-* `graham_cluley_9288374-0298740-94875-edge_collection`
+To see more information about how to set the variables, and what they do, read the `.env.markdown` file.

#### A note on Django and Postgres secrets

