Merge remote-tracking branch 'origin/main' into security-upgrade
peternied committed Mar 15, 2024
2 parents a779ce3 + c5d01a7 commit d1ad07e
Showing 8 changed files with 44 additions and 13 deletions.
2 changes: 1 addition & 1 deletion _clients/OpenSearch-dot-net.md
@@ -400,7 +400,7 @@ internal class Program
FirstName = "Paulo",
LastName = "Santos",
Gpa = 3.93,
- GradYear = 2021 };v
+ GradYear = 2021 };
var response = client.Index<StringResponse>("students", "100",
PostData.Serializable(student));
Console.WriteLine(response.Body);
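For reference, the low-level `Index` call in this snippet roughly corresponds to the following REST request (a sketch only; the exact document path and field-name casing depend on the client version and serializer configuration):

```json
PUT /students/_doc/100
{
  "FirstName": "Paulo",
  "LastName": "Santos",
  "Gpa": 3.93,
  "GradYear": 2021
}
```
{% include copy-curl.html %}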
3 changes: 3 additions & 0 deletions _clients/index.md
@@ -35,6 +35,9 @@ OpenSearch provides clients for the following programming languages and platform
* [OpenSearch .NET clients]({{site.url}}{{site.baseurl}}/clients/dot-net/)
* **Rust**
* [OpenSearch Rust client]({{site.url}}{{site.baseurl}}/clients/rust/)
* **Hadoop**
* [OpenSearch Hadoop client](https://github.com/opensearch-project/opensearch-hadoop)


For a client compatibility matrix, see the COMPATIBILITY.md file in the client's repository.
{: .note}
2 changes: 1 addition & 1 deletion _install-and-configure/install-dashboards/plugins.md
@@ -9,7 +9,7 @@ redirect_from:

# Managing OpenSearch Dashboards plugins

- OpenSearch Dashboards provides a command line tool called `opensearch-plugin` for managing plugins. This tool allows you to:
+ OpenSearch Dashboards provides a command line tool called `opensearch-dashboards-plugin` for managing plugins. This tool allows you to:

- List installed plugins.
- Install plugins.
3 changes: 2 additions & 1 deletion _install-and-configure/install-opensearch/index.md
@@ -117,4 +117,5 @@ Property | Description
`opensearch.xcontent.string.length.max=<value>` | By default, OpenSearch does not impose any limits on the maximum length of the JSON/YAML/CBOR/Smile string fields. To protect your cluster against potential distributed denial-of-service (DDoS) or memory issues, you can set the `opensearch.xcontent.string.length.max` system property to a reasonable limit (the maximum is 2,147,483,647), for example, `-Dopensearch.xcontent.string.length.max=5000000`. |
`opensearch.xcontent.fast_double_writer=[true|false]` | By default, OpenSearch serializes floating-point numbers using the default implementation provided by the Java Runtime Environment. Set this value to `true` to use the Schubfach algorithm, which is faster but may lead to small differences in precision. Default is `false`. |
`opensearch.xcontent.name.length.max=<value>` | By default, OpenSearch does not impose any limits on the maximum length of the JSON/YAML/CBOR/Smile field names. To protect your cluster against potential DDoS or memory issues, you can set the `opensearch.xcontent.name.length.max` system property to a reasonable limit (the maximum is 2,147,483,647), for example, `-Dopensearch.xcontent.name.length.max=50000`. |
- `opensearch.xcontent.depth.max=<value>` | By default, OpenSearch does not impose any limits on the maximum nesting depth for JSON/YAML/CBOR/Smile documents. To protect your cluster against potential DDoS or memory issues, you can set the `opensearch.xcontent.depth.max` system property to a reasonable limit (the maximum is 2,147,483,647), for example, `-Dopensearch.xcontent.name.length.max=1000`. |
+ `opensearch.xcontent.depth.max=<value>` | By default, OpenSearch does not impose any limits on the maximum nesting depth for JSON/YAML/CBOR/Smile documents. To protect your cluster against potential DDoS or memory issues, you can set the `opensearch.xcontent.depth.max` system property to a reasonable limit (the maximum is 2,147,483,647), for example, `-Dopensearch.xcontent.depth.max=1000`. |
+ `opensearch.xcontent.codepoint.max=<value>` | By default, OpenSearch imposes a limit of `52428800` on the maximum size of the YAML documents (in code points). To protect your cluster against potential DDoS or memory issues, you can change the `opensearch.xcontent.codepoint.max` system property to a reasonable limit (the maximum is 2,147,483,647). For example, `-Dopensearch.xcontent.codepoint.max=5000000`. |
18 changes: 10 additions & 8 deletions _ml-commons-plugin/remote-models/connectors.md
@@ -214,20 +214,21 @@ The `parameters` section requires the following options when using `aws_sigv4` a

### Cohere connector

- You can use the following example request to create a standalone Cohere connector:
+ You can use the following example request to create a standalone Cohere connector using the Embed V3 model. For more information, see [Cohere connector blueprint](https://github.com/opensearch-project/ml-commons/blob/2.x/docs/remote_inference_blueprints/cohere_connector_embedding_blueprint).

```json
POST /_plugins/_ml/connectors/_create
{
"name": "<YOUR CONNECTOR NAME>",
"description": "<YOUR CONNECTOR DESCRIPTION>",
"version": "<YOUR CONNECTOR VERSION>",
"name": "Cohere Embed Model",
"description": "The connector to Cohere's public embed API",
"version": "1",
"protocol": "http",
"credential": {
"cohere_key": "<YOUR COHERE API KEY HERE>"
"cohere_key": "<ENTER_COHERE_API_KEY_HERE>"
},
"parameters": {
"model": "embed-english-v2.0",
"model": "embed-english-v3.0",
"input_type":"search_document",
"truncate": "END"
},
"actions": [
@@ -236,9 +237,10 @@
"method": "POST",
"url": "https://api.cohere.ai/v1/embed",
"headers": {
"Authorization": "Bearer ${credential.cohere_key}"
"Authorization": "Bearer ${credential.cohere_key}",
"Request-Source": "unspecified:opensearch"
},
"request_body": "{ \"texts\": ${parameters.texts}, \"truncate\": \"${parameters.truncate}\", \"model\": \"${parameters.model}\" }",
"request_body": "{ \"texts\": ${parameters.texts}, \"truncate\": \"${parameters.truncate}\", \"model\": \"${parameters.model}\", \"input_type\": \"${parameters.input_type}\" }",
"pre_process_function": "connector.pre_process.cohere.embedding",
"post_process_function": "connector.post_process.cohere.embedding"
}
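The request above only creates the connector. As a rough follow-up sketch (the model name and description are illustrative, and the exact workflow can vary by ML Commons version), the `connector_id` returned in the create response can then be used to register a remote model:

```json
POST /_plugins/_ml/models/_register
{
  "name": "cohere-embed-english-v3",
  "function_name": "remote",
  "description": "Remote Cohere embedding model that uses the connector created above",
  "connector_id": "<CONNECTOR_ID_FROM_CREATE_RESPONSE>"
}
```
{% include copy-curl.html %}

Registering the model returns a model ID, which you deploy and then reference from downstream features such as neural search.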
7 changes: 7 additions & 0 deletions _security/access-control/api.md
@@ -29,6 +29,13 @@ plugins.security.restapi.roles_enabled: ["<role>", ...]
```
{% include copy.html %}
If you're working with APIs that manage distinguished names or certificates and require super admin access, enable the REST API admin configuration in your `opensearch.yml` file, as shown in the following example:

```yml
plugins.security.restapi.admin.enabled: true
```
{% include copy.html %}

These roles can now access all APIs. To prevent access to certain APIs:

```yml
4 changes: 4 additions & 0 deletions _security/authentication-backends/jwt.md
@@ -106,6 +106,8 @@ jwt_auth_domain:
jwt_url_parameter: null
subject_key: null
roles_key: null
required_audience: null
required_issuer: null
jwt_clock_skew_tolerance_seconds: 20
authentication_backend:
type: noop
@@ -120,6 +122,8 @@ Name | Description
`jwt_url_parameter` | If the token is not transmitted in the HTTP header but rather as an URL parameter, define the name of the parameter here.
`subject_key` | The key in the JSON payload that stores the username. If not set, the [subject](https://tools.ietf.org/html/rfc7519#section-4.1.2) registered claim is used.
`roles_key` | The key in the JSON payload that stores the user's roles. The value of this key must be a comma-separated list of roles.
`required_audience` | The name of the audience that the JWT must specify. This corresponds to the [`aud` claim of the JWT](https://datatracker.ietf.org/doc/html/rfc7519#section-4.1.3).
`required_issuer` | The target issuer of the JWT stored in the JSON payload. This corresponds to the [`iss` claim of the JWT](https://datatracker.ietf.org/doc/html/rfc7519#section-4.1.1).
`jwt_clock_skew_tolerance_seconds` | Sets a window of time, in seconds, to compensate for any disparity between the JWT authentication server and OpenSearch node clock times, thereby preventing authentication failures due to the misalignment. Security sets 30 seconds as the default. Use this setting to apply a custom value.

Because JWTs are self-contained and the user is authenticated at the HTTP level, no additional `authentication_backend` is needed. Set this value to `noop`.
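For illustration only (the issuer and audience values here are hypothetical), a decoded JWT payload that would satisfy `required_issuer: https://idp.example.com` and `required_audience: opensearch` looks similar to the following; tokens whose `iss` or `aud` claims do not match the configured values are rejected:

```json
{
  "iss": "https://idp.example.com",
  "aud": "opensearch",
  "sub": "jdoe",
  "roles": "admin,readall",
  "exp": 1718000000
}
```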
@@ -10,7 +10,7 @@ redirect_from:

# Search backpressure

- Search backpressure is a mechanism used to identify resource-intensive search requests and cancel them when the node is under duress. If a search request on a node or shard has breached the resource limits and does not recover within a certain threshold, it is rejected. These thresholds are dynamic and configurable through [cluster settings](#search-backpressure-settings).
+ Search backpressure is a mechanism used to identify resource-intensive search requests and cancel them when the node is under duress. If a search request on a node or shard has breached the resource limits and does not recover within a certain threshold, it is rejected. These thresholds are dynamic and configurable through [cluster settings](#search-backpressure-settings) using the `/_cluster/settings` API endpoint.

## Measuring resource consumption

@@ -78,6 +78,20 @@ Search backpressure runs in `monitor_only` (default), `enforced`, or `disabled`

Search backpressure adds several settings to the standard OpenSearch cluster settings. These settings are dynamic, so you can change the default behavior of this feature without restarting your cluster.

To configure these settings, send a PUT request to `/_cluster/settings`:

```json
PUT /_cluster/settings
{
"persistent": {
"search_backpressure": {
"mode": "monitor_only"
}
}
}
```
{% include copy-curl.html %}

Setting | Default | Description
:--- | :--- | :---
search_backpressure.mode | `monitor_only` | The search backpressure [mode](#search-backpressure-modes). Valid values are `monitor_only`, `enforced`, or `disabled`.
@@ -259,4 +273,4 @@ The `cancellation_stats` object contains the following statistics for the tasks
Field Name | Data type | Description
:--- | :--- | :---
cancellation_count | Integer | The total number of tasks marked for cancellation since the node last restarted.
- cancellation_limit_reached_count | Integer | The number of times when the number of tasks eligible for cancellation exceeded the set cancellation threshold.
+ cancellation_limit_reached_count | Integer | The number of times when the number of tasks eligible for cancellation exceeded the set cancellation threshold.
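These statistics are exposed through the node stats API; as a minimal sketch (the exact response structure depends on your OpenSearch version), you can retrieve them with the following request:

```json
GET _nodes/stats/search_backpressure
```
{% include copy-curl.html %}

The response typically contains `search_task` and `search_shard_task` sections, each of which includes the `cancellation_stats` fields described in the preceding table.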
