
fix(deps): update dependency openai to v4.77.0 #228

Merged: 1 commit, Jan 6, 2025

Conversation


@renovate renovate bot commented Jan 1, 2025

This PR contains the following updates:

Package: openai
Change: 4.76.3 -> 4.77.0

Release Notes

openai/openai-node (openai)

v4.77.0

Compare Source

Full Changelog: v4.76.3...v4.77.0

Features
  • api: new o1 and GPT-4o models + preference fine-tuning (#1229) (2e872d4)
Chores
  • internal: fix some typos (#1227) (d51fcfe)
  • internal: spec update (#1230) (ed2b61d)

Configuration

📅 Schedule: Branch creation - "* 0-4 * * 3" (UTC), Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.


  • If you want to rebase/retry this PR, check this box

This PR was generated by Mend Renovate. View the repository job log.


github-actions bot commented Jan 1, 2025

openai debug - [puLL-Merge] - openai/openai-node@v4.77.0

Diff
diff --git .release-please-manifest.json .release-please-manifest.json
index 52c31fe71..6b843f931 100644
--- .release-please-manifest.json
+++ .release-please-manifest.json
@@ -1,3 +1,3 @@
 {
-  ".": "4.76.3"
+  ".": "4.77.0"
 }
diff --git .stats.yml .stats.yml
index 3cc042fe0..7b5235e3c 100644
--- .stats.yml
+++ .stats.yml
@@ -1,2 +1,2 @@
 configured_endpoints: 68
-openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai-2e0e0678be19d1118fd796af291822075e40538dba326611e177e9f3dc245a53.yml
+openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai-0d64ca9e45f51b4279f87b205eeb3a3576df98407698ce053f2e2302c1c08df1.yml
diff --git CHANGELOG.md CHANGELOG.md
index 4b6f57fe4..d33ce4c1a 100644
--- CHANGELOG.md
+++ CHANGELOG.md
@@ -1,5 +1,19 @@
 # Changelog
 
+## 4.77.0 (2024-12-17)
+
+Full Changelog: [v4.76.3...v4.77.0](https://github.com/openai/openai-node/compare/v4.76.3...v4.77.0)
+
+### Features
+
+* **api:** new o1 and GPT-4o models + preference fine-tuning ([#1229](https://github.com/openai/openai-node/issues/1229)) ([2e872d4](https://github.com/openai/openai-node/commit/2e872d4ac3717ab8f61741efffb7a31acd798338))
+
+
+### Chores
+
+* **internal:** fix some typos ([#1227](https://github.com/openai/openai-node/issues/1227)) ([d51fcfe](https://github.com/openai/openai-node/commit/d51fcfe3a66550a684eeeb0e6f17e1d9825cdf78))
+* **internal:** spec update ([#1230](https://github.com/openai/openai-node/issues/1230)) ([ed2b61d](https://github.com/openai/openai-node/commit/ed2b61d32703b64d9f91223bc02627a607f60483))
+
 ## 4.76.3 (2024-12-13)
 
 Full Changelog: [v4.76.2...v4.76.3](https://github.com/openai/openai-node/compare/v4.76.2...v4.76.3)
diff --git api.md api.md
index 465730de8..54bcf08d7 100644
--- api.md
+++ api.md
@@ -41,6 +41,7 @@ Types:
 - <code><a href="./src/resources/chat/completions.ts">ChatCompletionContentPartInputAudio</a></code>
 - <code><a href="./src/resources/chat/completions.ts">ChatCompletionContentPartRefusal</a></code>
 - <code><a href="./src/resources/chat/completions.ts">ChatCompletionContentPartText</a></code>
+- <code><a href="./src/resources/chat/completions.ts">ChatCompletionDeveloperMessageParam</a></code>
 - <code><a href="./src/resources/chat/completions.ts">ChatCompletionFunctionCallOption</a></code>
 - <code><a href="./src/resources/chat/completions.ts">ChatCompletionFunctionMessageParam</a></code>
 - <code><a href="./src/resources/chat/completions.ts">ChatCompletionMessage</a></code>
@@ -49,6 +50,7 @@ Types:
 - <code><a href="./src/resources/chat/completions.ts">ChatCompletionModality</a></code>
 - <code><a href="./src/resources/chat/completions.ts">ChatCompletionNamedToolChoice</a></code>
 - <code><a href="./src/resources/chat/completions.ts">ChatCompletionPredictionContent</a></code>
+- <code><a href="./src/resources/chat/completions.ts">ChatCompletionReasoningEffort</a></code>
 - <code><a href="./src/resources/chat/completions.ts">ChatCompletionRole</a></code>
 - <code><a href="./src/resources/chat/completions.ts">ChatCompletionStreamOptions</a></code>
 - <code><a href="./src/resources/chat/completions.ts">ChatCompletionSystemMessageParam</a></code>
diff --git jsr.json jsr.json
index ef9ce6848..d76a2040e 100644
--- jsr.json
+++ jsr.json
@@ -1,6 +1,6 @@
 {
   "name": "@openai/openai",
-  "version": "4.76.3",
+  "version": "4.77.0",
   "exports": "./index.ts",
   "publish": {
     "exclude": [
diff --git package.json package.json
index 47f363ba1..54633aa5d 100644
--- package.json
+++ package.json
@@ -1,6 +1,6 @@
 {
   "name": "openai",
-  "version": "4.76.3",
+  "version": "4.77.0",
   "description": "The official TypeScript library for the OpenAI API",
   "author": "OpenAI <[email protected]>",
   "types": "dist/index.d.ts",
diff --git src/core.ts src/core.ts
index e1a93f272..68f1e676a 100644
--- src/core.ts
+++ src/core.ts
@@ -198,7 +198,7 @@ export abstract class APIClient {
     maxRetries = 2,
     timeout = 600000, // 10 minutes
     httpAgent,
-    fetch: overridenFetch,
+    fetch: overriddenFetch,
   }: {
     baseURL: string;
     maxRetries?: number | undefined;
@@ -211,7 +211,7 @@ export abstract class APIClient {
     this.timeout = validatePositiveInteger('timeout', timeout);
     this.httpAgent = httpAgent;
 
-    this.fetch = overridenFetch ?? fetch;
+    this.fetch = overriddenFetch ?? fetch;
   }
 
   protected authHeaders(opts: FinalRequestOptions): Headers {
diff --git src/index.ts src/index.ts
index 58d7410e4..2320850fb 100644
--- src/index.ts
+++ src/index.ts
@@ -80,6 +80,7 @@ import {
   ChatCompletionCreateParams,
   ChatCompletionCreateParamsNonStreaming,
   ChatCompletionCreateParamsStreaming,
+  ChatCompletionDeveloperMessageParam,
   ChatCompletionFunctionCallOption,
   ChatCompletionFunctionMessageParam,
   ChatCompletionMessage,
@@ -88,6 +89,7 @@ import {
   ChatCompletionModality,
   ChatCompletionNamedToolChoice,
   ChatCompletionPredictionContent,
+  ChatCompletionReasoningEffort,
   ChatCompletionRole,
   ChatCompletionStreamOptions,
   ChatCompletionSystemMessageParam,
@@ -353,6 +355,7 @@ export declare namespace OpenAI {
     type ChatCompletionContentPartInputAudio as ChatCompletionContentPartInputAudio,
     type ChatCompletionContentPartRefusal as ChatCompletionContentPartRefusal,
     type ChatCompletionContentPartText as ChatCompletionContentPartText,
+    type ChatCompletionDeveloperMessageParam as ChatCompletionDeveloperMessageParam,
     type ChatCompletionFunctionCallOption as ChatCompletionFunctionCallOption,
     type ChatCompletionFunctionMessageParam as ChatCompletionFunctionMessageParam,
     type ChatCompletionMessage as ChatCompletionMessage,
@@ -361,6 +364,7 @@ export declare namespace OpenAI {
     type ChatCompletionModality as ChatCompletionModality,
     type ChatCompletionNamedToolChoice as ChatCompletionNamedToolChoice,
     type ChatCompletionPredictionContent as ChatCompletionPredictionContent,
+    type ChatCompletionReasoningEffort as ChatCompletionReasoningEffort,
     type ChatCompletionRole as ChatCompletionRole,
     type ChatCompletionStreamOptions as ChatCompletionStreamOptions,
     type ChatCompletionSystemMessageParam as ChatCompletionSystemMessageParam,
diff --git src/resources/chat/chat.ts src/resources/chat/chat.ts
index 09cd3d123..2230b19bd 100644
--- src/resources/chat/chat.ts
+++ src/resources/chat/chat.ts
@@ -16,6 +16,7 @@ import {
   ChatCompletionCreateParams,
   ChatCompletionCreateParamsNonStreaming,
   ChatCompletionCreateParamsStreaming,
+  ChatCompletionDeveloperMessageParam,
   ChatCompletionFunctionCallOption,
   ChatCompletionFunctionMessageParam,
   ChatCompletionMessage,
@@ -24,6 +25,7 @@ import {
   ChatCompletionModality,
   ChatCompletionNamedToolChoice,
   ChatCompletionPredictionContent,
+  ChatCompletionReasoningEffort,
   ChatCompletionRole,
   ChatCompletionStreamOptions,
   ChatCompletionSystemMessageParam,
@@ -44,6 +46,8 @@ export class Chat extends APIResource {
 }
 
 export type ChatModel =
+  | 'o1'
+  | 'o1-2024-12-17'
   | 'o1-preview'
   | 'o1-preview-2024-09-12'
   | 'o1-mini'
@@ -52,10 +56,11 @@ export type ChatModel =
   | 'gpt-4o-2024-11-20'
   | 'gpt-4o-2024-08-06'
   | 'gpt-4o-2024-05-13'
-  | 'gpt-4o-realtime-preview'
-  | 'gpt-4o-realtime-preview-2024-10-01'
   | 'gpt-4o-audio-preview'
   | 'gpt-4o-audio-preview-2024-10-01'
+  | 'gpt-4o-audio-preview-2024-12-17'
+  | 'gpt-4o-mini-audio-preview'
+  | 'gpt-4o-mini-audio-preview-2024-12-17'
   | 'chatgpt-4o-latest'
   | 'gpt-4o-mini'
   | 'gpt-4o-mini-2024-07-18'
@@ -96,6 +101,7 @@ export declare namespace Chat {
     type ChatCompletionContentPartInputAudio as ChatCompletionContentPartInputAudio,
     type ChatCompletionContentPartRefusal as ChatCompletionContentPartRefusal,
     type ChatCompletionContentPartText as ChatCompletionContentPartText,
+    type ChatCompletionDeveloperMessageParam as ChatCompletionDeveloperMessageParam,
     type ChatCompletionFunctionCallOption as ChatCompletionFunctionCallOption,
     type ChatCompletionFunctionMessageParam as ChatCompletionFunctionMessageParam,
     type ChatCompletionMessage as ChatCompletionMessage,
@@ -104,6 +110,7 @@ export declare namespace Chat {
     type ChatCompletionModality as ChatCompletionModality,
     type ChatCompletionNamedToolChoice as ChatCompletionNamedToolChoice,
     type ChatCompletionPredictionContent as ChatCompletionPredictionContent,
+    type ChatCompletionReasoningEffort as ChatCompletionReasoningEffort,
     type ChatCompletionRole as ChatCompletionRole,
     type ChatCompletionStreamOptions as ChatCompletionStreamOptions,
     type ChatCompletionSystemMessageParam as ChatCompletionSystemMessageParam,
diff --git src/resources/chat/completions.ts src/resources/chat/completions.ts
index 8e9a4385e..31f5814cb 100644
--- src/resources/chat/completions.ts
+++ src/resources/chat/completions.ts
@@ -15,6 +15,12 @@ export class Completions extends APIResource {
    * [text generation](https://platform.openai.com/docs/guides/text-generation),
    * [vision](https://platform.openai.com/docs/guides/vision), and
    * [audio](https://platform.openai.com/docs/guides/audio) guides.
+   *
+   * Parameter support can differ depending on the model used to generate the
+   * response, particularly for newer reasoning models. Parameters that are only
+   * supported for reasoning models are noted below. For the current state of
+   * unsupported parameters in reasoning models,
+   * [refer to the reasoning guide](https://platform.openai.com/docs/guides/reasoning).
    */
   create(
     body: ChatCompletionCreateParamsNonStreaming,
@@ -135,6 +141,9 @@ export namespace ChatCompletion {
   }
 }
 
+/**
+ * Messages sent by the model in response to user messages.
+ */
 export interface ChatCompletionAssistantMessageParam {
   /**
    * The role of the messages author, in this case `assistant`.
@@ -530,6 +539,29 @@ export interface ChatCompletionContentPartText {
   type: 'text';
 }
 
+/**
+ * Developer-provided instructions that the model should follow, regardless of
+ * messages sent by the user. With o1 models and newer, `developer` messages
+ * replace the previous `system` messages.
+ */
+export interface ChatCompletionDeveloperMessageParam {
+  /**
+   * The contents of the developer message.
+   */
+  content: string | Array<ChatCompletionContentPartText>;
+
+  /**
+   * The role of the messages author, in this case `developer`.
+   */
+  role: 'developer';
+
+  /**
+   * An optional name for the participant. Provides the model information to
+   * differentiate between participants of the same role.
+   */
+  name?: string;
+}
+
 /**
  * Specifying a particular function via `{"name": "my_function"}` forces the model
  * to call that function.
@@ -620,7 +652,13 @@ export namespace ChatCompletionMessage {
   }
 }
 
+/**
+ * Developer-provided instructions that the model should follow, regardless of
+ * messages sent by the user. With o1 models and newer, `developer` messages
+ * replace the previous `system` messages.
+ */
 export type ChatCompletionMessageParam =
+  | ChatCompletionDeveloperMessageParam
   | ChatCompletionSystemMessageParam
   | ChatCompletionUserMessageParam
   | ChatCompletionAssistantMessageParam
@@ -707,6 +745,16 @@ export interface ChatCompletionPredictionContent {
   type: 'content';
 }
 
+/**
+ * **o1 models only**
+ *
+ * Constrains effort on reasoning for
+ * [reasoning models](https://platform.openai.com/docs/guides/reasoning). Currently
+ * supported values are `low`, `medium`, and `high`. Reducing reasoning effort can
+ * result in faster responses and fewer tokens used on reasoning in a response.
+ */
+export type ChatCompletionReasoningEffort = 'low' | 'medium' | 'high';
+
 /**
  * The role of the author of a message
  */
@@ -725,6 +773,11 @@ export interface ChatCompletionStreamOptions {
   include_usage?: boolean;
 }
 
+/**
+ * Developer-provided instructions that the model should follow, regardless of
+ * messages sent by the user. With o1 models and newer, use `developer` messages
+ * for this purpose instead.
+ */
 export interface ChatCompletionSystemMessageParam {
   /**
    * The contents of the system message.
@@ -835,6 +888,10 @@ export interface ChatCompletionToolMessageParam {
   tool_call_id: string;
 }
 
+/**
+ * Messages sent by an end user, containing prompts or additional context
+ * information.
+ */
 export interface ChatCompletionUserMessageParam {
   /**
    * The contents of the user message.
@@ -891,20 +948,22 @@ export interface ChatCompletionCreateParamsBase {
    * Number between -2.0 and 2.0. Positive values penalize new tokens based on their
    * existing frequency in the text so far, decreasing the model's likelihood to
    * repeat the same line verbatim.
-   *
-   * [See more information about frequency and presence penalties.](https://platform.openai.com/docs/guides/text-generation)
    */
   frequency_penalty?: number | null;
 
   /**
    * Deprecated in favor of `tool_choice`.
    *
-   * Controls which (if any) function is called by the model. `none` means the model
-   * will not call a function and instead generates a message. `auto` means the model
-   * can pick between generating a message or calling a function. Specifying a
-   * particular function via `{"name": "my_function"}` forces the model to call that
+   * Controls which (if any) function is called by the model.
+   *
+   * `none` means the model will not call a function and instead generates a message.
+   *
+   * `auto` means the model can pick between generating a message or calling a
    * function.
    *
+   * Specifying a particular function via `{"name": "my_function"}` forces the model
+   * to call that function.
+   *
    * `none` is the default when no functions are present. `auto` is the default if
    * functions are present.
    */
@@ -998,17 +1057,21 @@ export interface ChatCompletionCreateParamsBase {
    * Number between -2.0 and 2.0. Positive values penalize new tokens based on
    * whether they appear in the text so far, increasing the model's likelihood to
    * talk about new topics.
-   *
-   * [See more information about frequency and presence penalties.](https://platform.openai.com/docs/guides/text-generation)
    */
   presence_penalty?: number | null;
 
   /**
-   * An object specifying the format that the model must output. Compatible with
-   * [GPT-4o](https://platform.openai.com/docs/models#gpt-4o),
-   * [GPT-4o mini](https://platform.openai.com/docs/models#gpt-4o-mini),
-   * [GPT-4 Turbo](https://platform.openai.com/docs/models#gpt-4-turbo-and-gpt-4) and
-   * all GPT-3.5 Turbo models newer than `gpt-3.5-turbo-1106`.
+   * **o1 models only**
+   *
+   * Constrains effort on reasoning for
+   * [reasoning models](https://platform.openai.com/docs/guides/reasoning). Currently
+   * supported values are `low`, `medium`, and `high`. Reducing reasoning effort can
+   * result in faster responses and fewer tokens used on reasoning in a response.
+   */
+  reasoning_effort?: ChatCompletionReasoningEffort;
+
+  /**
+   * An object specifying the format that the model must output.
    *
    * Setting to `{ "type": "json_schema", "json_schema": {...} }` enables Structured
    * Outputs which ensures the model will match your supplied JSON schema. Learn more
@@ -1088,9 +1151,8 @@ export interface ChatCompletionCreateParamsBase {
   /**
    * What sampling temperature to use, between 0 and 2. Higher values like 0.8 will
    * make the output more random, while lower values like 0.2 will make it more
-   * focused and deterministic.
-   *
-   * We generally recommend altering this or `top_p` but not both.
+   * focused and deterministic. We generally recommend altering this or `top_p` but
+   * not both.
    */
   temperature?: number | null;
 
@@ -1223,6 +1285,7 @@ export declare namespace Completions {
     type ChatCompletionContentPartInputAudio as ChatCompletionContentPartInputAudio,
     type ChatCompletionContentPartRefusal as ChatCompletionContentPartRefusal,
     type ChatCompletionContentPartText as ChatCompletionContentPartText,
+    type ChatCompletionDeveloperMessageParam as ChatCompletionDeveloperMessageParam,
     type ChatCompletionFunctionCallOption as ChatCompletionFunctionCallOption,
     type ChatCompletionFunctionMessageParam as ChatCompletionFunctionMessageParam,
     type ChatCompletionMessage as ChatCompletionMessage,
@@ -1231,6 +1294,7 @@ export declare namespace Completions {
     type ChatCompletionModality as ChatCompletionModality,
     type ChatCompletionNamedToolChoice as ChatCompletionNamedToolChoice,
     type ChatCompletionPredictionContent as ChatCompletionPredictionContent,
+    type ChatCompletionReasoningEffort as ChatCompletionReasoningEffort,
     type ChatCompletionRole as ChatCompletionRole,
     type ChatCompletionStreamOptions as ChatCompletionStreamOptions,
     type ChatCompletionSystemMessageParam as ChatCompletionSystemMessageParam,
diff --git src/resources/chat/index.ts src/resources/chat/index.ts
index 262bf75a2..c3be19402 100644
--- src/resources/chat/index.ts
+++ src/resources/chat/index.ts
@@ -13,6 +13,7 @@ export {
   type ChatCompletionContentPartInputAudio,
   type ChatCompletionContentPartRefusal,
   type ChatCompletionContentPartText,
+  type ChatCompletionDeveloperMessageParam,
   type ChatCompletionFunctionCallOption,
   type ChatCompletionFunctionMessageParam,
   type ChatCompletionMessage,
@@ -21,6 +22,7 @@ export {
   type ChatCompletionModality,
   type ChatCompletionNamedToolChoice,
   type ChatCompletionPredictionContent,
+  type ChatCompletionReasoningEffort,
   type ChatCompletionRole,
   type ChatCompletionStreamOptions,
   type ChatCompletionSystemMessageParam,
diff --git src/resources/fine-tuning/jobs/jobs.ts src/resources/fine-tuning/jobs/jobs.ts
index 0c320e028..44dd011aa 100644
--- src/resources/fine-tuning/jobs/jobs.ts
+++ src/resources/fine-tuning/jobs/jobs.ts
@@ -127,9 +127,8 @@ export interface FineTuningJob {
   finished_at: number | null;
 
   /**
-   * The hyperparameters used for the fine-tuning job. See the
-   * [fine-tuning guide](https://platform.openai.com/docs/guides/fine-tuning) for
-   * more details.
+   * The hyperparameters used for the fine-tuning job. This value will only be
+   * returned when running `supervised` jobs.
    */
   hyperparameters: FineTuningJob.Hyperparameters;
 
@@ -195,6 +194,11 @@ export interface FineTuningJob {
    * A list of integrations to enable for this fine-tuning job.
    */
   integrations?: Array<FineTuningJobWandbIntegrationObject> | null;
+
+  /**
+   * The method used for fine-tuning.
+   */
+  method?: FineTuningJob.Method;
 }
 
 export namespace FineTuningJob {
@@ -221,18 +225,125 @@ export namespace FineTuningJob {
   }
 
   /**
-   * The hyperparameters used for the fine-tuning job. See the
-   * [fine-tuning guide](https://platform.openai.com/docs/guides/fine-tuning) for
-   * more details.
+   * The hyperparameters used for the fine-tuning job. This value will only be
+   * returned when running `supervised` jobs.
    */
   export interface Hyperparameters {
+    /**
+     * Number of examples in each batch. A larger batch size means that model
+     * parameters are updated less frequently, but with lower variance.
+     */
+    batch_size?: 'auto' | number;
+
+    /**
+     * Scaling factor for the learning rate. A smaller learning rate may be useful to
+     * avoid overfitting.
+     */
+    learning_rate_multiplier?: 'auto' | number;
+
     /**
      * The number of epochs to train the model for. An epoch refers to one full cycle
-     * through the training dataset. "auto" decides the optimal number of epochs based
-     * on the size of the dataset. If setting the number manually, we support any
-     * number between 1 and 50 epochs.
+     * through the training dataset.
+     */
+    n_epochs?: 'auto' | number;
+  }
+
+  /**
+   * The method used for fine-tuning.
+   */
+  export interface Method {
+    /**
+     * Configuration for the DPO fine-tuning method.
+     */
+    dpo?: Method.Dpo;
+
+    /**
+     * Configuration for the supervised fine-tuning method.
      */
-    n_epochs: 'auto' | number;
+    supervised?: Method.Supervised;
+
+    /**
+     * The type of method. Is either `supervised` or `dpo`.
+     */
+    type?: 'supervised' | 'dpo';
+  }
+
+  export namespace Method {
+    /**
+     * Configuration for the DPO fine-tuning method.
+     */
+    export interface Dpo {
+      /**
+       * The hyperparameters used for the fine-tuning job.
+       */
+      hyperparameters?: Dpo.Hyperparameters;
+    }
+
+    export namespace Dpo {
+      /**
+       * The hyperparameters used for the fine-tuning job.
+       */
+      export interface Hyperparameters {
+        /**
+         * Number of examples in each batch. A larger batch size means that model
+         * parameters are updated less frequently, but with lower variance.
+         */
+        batch_size?: 'auto' | number;
+
+        /**
+         * The beta value for the DPO method. A higher beta value will increase the weight
+         * of the penalty between the policy and reference model.
+         */
+        beta?: 'auto' | number;
+
+        /**
+         * Scaling factor for the learning rate. A smaller learning rate may be useful to
+         * avoid overfitting.
+         */
+        learning_rate_multiplier?: 'auto' | number;
+
+        /**
+         * The number of epochs to train the model for. An epoch refers to one full cycle
+         * through the training dataset.
+         */
+        n_epochs?: 'auto' | number;
+      }
+    }
+
+    /**
+     * Configuration for the supervised fine-tuning method.
+     */
+    export interface Supervised {
+      /**
+       * The hyperparameters used for the fine-tuning job.
+       */
+      hyperparameters?: Supervised.Hyperparameters;
+    }
+
+    export namespace Supervised {
+      /**
+       * The hyperparameters used for the fine-tuning job.
+       */
+      export interface Hyperparameters {
+        /**
+         * Number of examples in each batch. A larger batch size means that model
+         * parameters are updated less frequently, but with lower variance.
+         */
+        batch_size?: 'auto' | number;
+
+        /**
+         * Scaling factor for the learning rate. A smaller learning rate may be useful to
+         * avoid overfitting.
+         */
+        learning_rate_multiplier?: 'auto' | number;
+
+        /**
+         * The number of epochs to train the model for. An epoch refers to one full cycle
+         * through the training dataset.
+         */
+        n_epochs?: 'auto' | number;
+      }
+    }
   }
 }
 
@@ -240,15 +351,40 @@ export namespace FineTuningJob {
  * Fine-tuning job event object
  */
 export interface FineTuningJobEvent {
+  /**
+   * The object identifier.
+   */
   id: string;
 
+  /**
+   * The Unix timestamp (in seconds) for when the fine-tuning job was created.
+   */
   created_at: number;
 
+  /**
+   * The log level of the event.
+   */
   level: 'info' | 'warn' | 'error';
 
+  /**
+   * The message of the event.
+   */
   message: string;
 
+  /**
+   * The object type, which is always "fine_tuning.job.event".
+   */
   object: 'fine_tuning.job.event';
+
+  /**
+   * The data associated with the event.
+   */
+  data?: unknown;
+
+  /**
+   * The type of event.
+   */
+  type?: 'message' | 'metrics';
 }
 
 export type FineTuningJobIntegration = FineTuningJobWandbIntegrationObject;
@@ -318,8 +454,10 @@ export interface JobCreateParams {
    * your file with the purpose `fine-tune`.
    *
    * The contents of the file should differ depending on if the model uses the
-   * [chat](https://platform.openai.com/docs/api-reference/fine-tuning/chat-input) or
+   * [chat](https://platform.openai.com/docs/api-reference/fine-tuning/chat-input),
    * [completions](https://platform.openai.com/docs/api-reference/fine-tuning/completions-input)
+   * format, or if the fine-tuning method uses the
+   * [preference](https://platform.openai.com/docs/api-reference/fine-tuning/preference-input)
    * format.
    *
    * See the [fine-tuning guide](https://platform.openai.com/docs/guides/fine-tuning)
@@ -328,7 +466,8 @@ export interface JobCreateParams {
   training_file: string;
 
   /**
-   * The hyperparameters used for the fine-tuning job.
+   * The hyperparameters used for the fine-tuning job. This value is now deprecated
+   * in favor of `method`, and should be passed in under the `method` parameter.
    */
   hyperparameters?: JobCreateParams.Hyperparameters;
 
@@ -337,6 +476,11 @@ export interface JobCreateParams {
    */
   integrations?: Array<JobCreateParams.Integration> | null;
 
+  /**
+   * The method used for fine-tuning.
+   */
+  method?: JobCreateParams.Method;
+
   /**
    * The seed controls the reproducibility of the job. Passing in the same seed and
    * job parameters should produce the same results, but may differ in rare cases. If
@@ -372,7 +516,9 @@ export interface JobCreateParams {
 
 export namespace JobCreateParams {
   /**
-   * The hyperparameters used for the fine-tuning job.
+   * @deprecated: The hyperparameters used for the fine-tuning job. This value is now
+   * deprecated in favor of `method`, and should be passed in under the `method`
+   * parameter.
    */
   export interface Hyperparameters {
     /**
@@ -444,6 +590,104 @@ export namespace JobCreateParams {
       tags?: Array<string>;
     }
   }
+
+  /**
+   * The method used for fine-tuning.
+   */
+  export interface Method {
+    /**
+     * Configuration for the DPO fine-tuning method.
+     */
+    dpo?: Method.Dpo;
+
+    /**
+     * Configuration for the supervised fine-tuning method.
+     */
+    supervised?: Method.Supervised;
+
+    /**
+     * The type of method. Is either `supervised` or `dpo`.
+     */
+    type?: 'supervised' | 'dpo';
+  }
+
+  export namespace Method {
+    /**
+     * Configuration for the DPO fine-tuning method.
+     */
+    export interface Dpo {
+      /**
+       * The hyperparameters used for the fine-tuning job.
+       */
+      hyperparameters?: Dpo.Hyperparameters;
+    }
+
+    export namespace Dpo {
+      /**
+       * The hyperparameters used for the fine-tuning job.
+       */
+      export interface Hyperparameters {
+        /**
+         * Number of examples in each batch. A larger batch size means that model
+         * parameters are updated less frequently, but with lower variance.
+         */
+        batch_size?: 'auto' | number;
+
+        /**
+         * The beta value for the DPO method. A higher beta value will increase the weight
+         * of the penalty between the policy and reference model.
+         */
+        beta?: 'auto' | number;
+
+        /**
+         * Scaling factor for the learning rate. A smaller learning rate may be useful to
+         * avoid overfitting.
+         */
+        learning_rate_multiplier?: 'auto' | number;
+
+        /**
+         * The number of epochs to train the model for. An epoch refers to one full cycle
+         * through the training dataset.
+         */
+        n_epochs?: 'auto' | number;
+      }
+    }
+
+    /**
+     * Configuration for the supervised fine-tuning method.
+     */
+    export interface Supervised {
+      /**
+       * The hyperparameters used for the fine-tuning job.
+       */
+      hyperparameters?: Supervised.Hyperparameters;
+    }
+
+    export namespace Supervised {
+      /**
+       * The hyperparameters used for the fine-tuning job.
+       */
+      export interface Hyperparameters {
+        /**
+         * Number of examples in each batch. A larger batch size means that model
+         * parameters are updated less frequently, but with lower variance.
+         */
+        batch_size?: 'auto' | number;
+
+        /**
+         * Scaling factor for the learning rate. A smaller learning rate may be useful to
+         * avoid overfitting.
+         */
+        learning_rate_multiplier?: 'auto' | number;
+
+        /**
+         * The number of epochs to train the model for. An epoch refers to one full cycle
+         * through the training dataset.
+         */
+        n_epochs?: 'auto' | number;
+      }
+    }
+  }
 }
 
 export interface JobListParams extends CursorPageParams {}
diff --git src/version.ts src/version.ts
index 01cd56405..fdf4e5224 100644
--- src/version.ts
+++ src/version.ts
@@ -1 +1 @@
-export const VERSION = '4.76.3'; // x-release-please-version
+export const VERSION = '4.77.0'; // x-release-please-version
diff --git tests/api-resources/chat/completions.test.ts tests/api-resources/chat/completions.test.ts
index 5dcbf9ad6..dfc09f69b 100644
--- tests/api-resources/chat/completions.test.ts
+++ tests/api-resources/chat/completions.test.ts
@@ -11,7 +11,7 @@ const client = new OpenAI({
 describe('resource completions', () => {
   test('create: only required params', async () => {
     const responsePromise = client.chat.completions.create({
-      messages: [{ content: 'string', role: 'system' }],
+      messages: [{ content: 'string', role: 'developer' }],
       model: 'gpt-4o',
     });
     const rawResponse = await responsePromise.asResponse();
@@ -25,7 +25,7 @@ describe('resource completions', () => {
 
   test('create: required and optional params', async () => {
     const response = await client.chat.completions.create({
-      messages: [{ content: 'string', role: 'system', name: 'name' }],
+      messages: [{ content: 'string', role: 'developer', name: 'name' }],
       model: 'gpt-4o',
       audio: { format: 'wav', voice: 'alloy' },
       frequency_penalty: -2,
@@ -41,6 +41,7 @@ describe('resource completions', () => {
       parallel_tool_calls: true,
       prediction: { content: 'string', type: 'content' },
       presence_penalty: -2,
+      reasoning_effort: 'low',
       response_format: { type: 'text' },
       seed: -9007199254740991,
       service_tier: 'auto',
diff --git tests/api-resources/fine-tuning/jobs/jobs.test.ts tests/api-resources/fine-tuning/jobs/jobs.test.ts
index 0ab09768a..4de83a8b7 100644
--- tests/api-resources/fine-tuning/jobs/jobs.test.ts
+++ tests/api-resources/fine-tuning/jobs/jobs.test.ts
@@ -34,6 +34,20 @@ describe('resource jobs', () => {
           wandb: { project: 'my-wandb-project', entity: 'entity', name: 'name', tags: ['custom-tag'] },
         },
       ],
+      method: {
+        dpo: {
+          hyperparameters: {
+            batch_size: 'auto',
+            beta: 'auto',
+            learning_rate_multiplier: 'auto',
+            n_epochs: 'auto',
+          },
+        },
+        supervised: {
+          hyperparameters: { batch_size: 'auto', learning_rate_multiplier: 'auto', n_epochs: 'auto' },
+        },
+        type: 'supervised',
+      },
       seed: 42,
       suffix: 'x',
       validation_file: 'file-abc123',
diff --git tests/index.test.ts tests/index.test.ts
index f39571121..bf113e7bb 100644
--- tests/index.test.ts
+++ tests/index.test.ts
@@ -177,7 +177,7 @@ describe('instantiate client', () => {
     expect(client.apiKey).toBe('My API Key');
   });
 
-  test('with overriden environment variable arguments', () => {
+  test('with overridden environment variable arguments', () => {
     // set options via env var
     process.env['OPENAI_API_KEY'] = 'another My API Key';
     const client = new OpenAI({ apiKey: 'My API Key' });

Description

This pull request (PR) updates the OpenAI Node.js client library to version 4.77.0. The changes add support for new models (o1 and new GPT-4o variants), introduce the `developer` message role and the `reasoning_effort` parameter, deprecate the top-level fine-tuning `hyperparameters` parameter in favor of `method`, update documentation, fix typos, and add test cases covering the new functionality.
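The new chat-completion surface can be sketched as follows. This is a self-contained sketch, not the SDK itself: the types below are local stand-ins mirroring the `ChatCompletionDeveloperMessageParam` and `ChatCompletionReasoningEffort` shapes added in this diff, and nothing here imports the openai package.

```typescript
// Local stand-ins for the SDK types added in this release; field names and
// literal values follow the diff.
type DeveloperMessage = { role: 'developer'; content: string; name?: string };
type ReasoningEffort = 'low' | 'medium' | 'high';

interface ChatCompletionSketchParams {
  model: string;
  messages: DeveloperMessage[];
  reasoning_effort?: ReasoningEffort; // o1 models only, per the diff's doc comment
}

// A request body exercising both additions, the `developer` role and
// `reasoning_effort`, as the updated tests in this PR now do.
const params: ChatCompletionSketchParams = {
  model: 'o1',
  messages: [{ role: 'developer', content: 'Answer concisely.' }],
  reasoning_effort: 'low',
};

console.log(params.messages[0].role);
```

With the real SDK, an object of this shape would be passed to `client.chat.completions.create(...)`, as the updated tests do.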

Possible Issues

  • The "overriden" → "overridden" typo fix is purely cosmetic (it renames a local parameter in `src/core.ts` and a test title), so it carries no functional risk but is easy to overlook in review.
  • The PR introduces the `developer` message role; for o1 models and newer, `developer` messages replace the previous `system` messages (which remain supported for older models), and the test suite now uses `developer` in place of `system`. This behavioral shift should be highlighted prominently to users.
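A minimal sketch of what the role change means for callers targeting o1-family models, assuming a simple in-place mapping of legacy `system` messages. The `migrateForO1` helper and the local `Message` type are hypothetical illustrations, not part of the SDK:

```typescript
// Hypothetical migration helper: for o1-family models, map legacy `system`
// messages to the new `developer` role. Other roles pass through unchanged.
type Role = 'system' | 'developer' | 'user' | 'assistant';
interface Message {
  role: Role;
  content: string;
  name?: string;
}

function migrateForO1(messages: Message[]): Message[] {
  return messages.map((m) => (m.role === 'system' ? { ...m, role: 'developer' as Role } : m));
}

const migrated = migrateForO1([
  { role: 'system', content: 'You are terse.' },
  { role: 'user', content: 'Hi' },
]);

console.log(migrated.map((m) => m.role).join(','));
```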
Changes


  1. Version Bump:

    • Updated Files:

      • .release-please-manifest.json
      • package.json
      • version.ts
      • jsr.json
    • Description: Bump the version from 4.76.3 to 4.77.0.

  2. Configuration and Metadata Updates:

    • Updated Files:

      • .stats.yml
      • CHANGELOG.md
    • Description: Modify the OpenAPI spec URL, update the changelog with new features and changes.

  3. Documentation Updates:

    • Updated Files:

      • api.md
    • Description: Update the API documentation with new types and parameters.

  4. API and Model Changes:

    • Updated Files:

      • src/core.ts
      • src/index.ts
      • src/resources/chat/chat.ts
      • src/resources/chat/completions.ts
      • src/resources/fine-tuning/jobs/jobs.ts
    • Description: Introduce new types and parameters, such as ChatCompletionDeveloperMessageParam and ChatCompletionReasoningEffort, and deprecate the top-level fine-tuning `hyperparameters` parameter in favor of the new `method` parameter.

  5. Test Case Adjustments:

    • Updated Files:

      • tests/api-resources/chat/completions.test.ts
      • tests/api-resources/fine-tuning/jobs/jobs.test.ts
      • tests/index.test.ts
    • Description: Replace occurrences of 'system' with 'developer' in test cases, and add test coverage for new functionality such as `reasoning_effort` and the new fine-tuning `method` parameter.
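The new fine-tuning `method` parameter exercised by the updated jobs test can be sketched as follows. The interfaces are local stand-ins for the shapes under `JobCreateParams.Method` in this diff, so the sketch runs without the openai package:

```typescript
// Local mirrors of the `method` shapes added in this release. Hyperparameter
// fields accept either the literal 'auto' or a number, per the diff.
type Auto = 'auto' | number;

interface SupervisedHyperparameters {
  batch_size?: Auto;
  learning_rate_multiplier?: Auto;
  n_epochs?: Auto;
}

interface Method {
  type?: 'supervised' | 'dpo';
  supervised?: { hyperparameters?: SupervisedHyperparameters };
  // DPO adds a `beta` hyperparameter on top of the supervised set.
  dpo?: { hyperparameters?: SupervisedHyperparameters & { beta?: Auto } };
}

// The same configuration the updated fine-tuning test sends.
const method: Method = {
  type: 'supervised',
  supervised: {
    hyperparameters: { batch_size: 'auto', learning_rate_multiplier: 'auto', n_epochs: 'auto' },
  },
};

console.log(method.type);
```

With the real SDK, this object would be passed as the `method` field of `client.fineTuning.jobs.create(...)`, alongside `training_file` and `model`, replacing the now-deprecated top-level `hyperparameters`.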

sequenceDiagram
  participant Developer
  participant Repo as Repository
  participant Client as Client Library
  participant API
  
  Developer->>Repo: Creates pull request
  Repo->>Client: Updates version number
  Client->>API: Adds new model support
  Client->>API: Replaces 'system' with 'developer'
  Client->>API: Adds new types and parameters
  Client->>Repo: Updates documentation
  Client->>Repo: Updates test cases
  Repo->>Developer: PR ready for review


github-actions bot commented Jan 1, 2025

bedrock debug - [puLL-Merge] - openai/[email protected]

Diff
diff --git .release-please-manifest.json .release-please-manifest.json
index 52c31fe71..6b843f931 100644
--- .release-please-manifest.json
+++ .release-please-manifest.json
@@ -1,3 +1,3 @@
 {
-  ".": "4.76.3"
+  ".": "4.77.0"
 }
diff --git .stats.yml .stats.yml
index 3cc042fe0..7b5235e3c 100644
--- .stats.yml
+++ .stats.yml
@@ -1,2 +1,2 @@
 configured_endpoints: 68
-openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai-2e0e0678be19d1118fd796af291822075e40538dba326611e177e9f3dc245a53.yml
+openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai-0d64ca9e45f51b4279f87b205eeb3a3576df98407698ce053f2e2302c1c08df1.yml
diff --git CHANGELOG.md CHANGELOG.md
index 4b6f57fe4..d33ce4c1a 100644
--- CHANGELOG.md
+++ CHANGELOG.md
@@ -1,5 +1,19 @@
 # Changelog
 
+## 4.77.0 (2024-12-17)
+
+Full Changelog: [v4.76.3...v4.77.0](https://github.com/openai/openai-node/compare/v4.76.3...v4.77.0)
+
+### Features
+
+* **api:** new o1 and GPT-4o models + preference fine-tuning ([#1229](https://github.com/openai/openai-node/issues/1229)) ([2e872d4](https://github.com/openai/openai-node/commit/2e872d4ac3717ab8f61741efffb7a31acd798338))
+
+
+### Chores
+
+* **internal:** fix some typos ([#1227](https://github.com/openai/openai-node/issues/1227)) ([d51fcfe](https://github.com/openai/openai-node/commit/d51fcfe3a66550a684eeeb0e6f17e1d9825cdf78))
+* **internal:** spec update ([#1230](https://github.com/openai/openai-node/issues/1230)) ([ed2b61d](https://github.com/openai/openai-node/commit/ed2b61d32703b64d9f91223bc02627a607f60483))
+
 ## 4.76.3 (2024-12-13)
 
 Full Changelog: [v4.76.2...v4.76.3](https://github.com/openai/openai-node/compare/v4.76.2...v4.76.3)
diff --git api.md api.md
index 465730de8..54bcf08d7 100644
--- api.md
+++ api.md
@@ -41,6 +41,7 @@ Types:
 - <code><a href="./src/resources/chat/completions.ts">ChatCompletionContentPartInputAudio</a></code>
 - <code><a href="./src/resources/chat/completions.ts">ChatCompletionContentPartRefusal</a></code>
 - <code><a href="./src/resources/chat/completions.ts">ChatCompletionContentPartText</a></code>
+- <code><a href="./src/resources/chat/completions.ts">ChatCompletionDeveloperMessageParam</a></code>
 - <code><a href="./src/resources/chat/completions.ts">ChatCompletionFunctionCallOption</a></code>
 - <code><a href="./src/resources/chat/completions.ts">ChatCompletionFunctionMessageParam</a></code>
 - <code><a href="./src/resources/chat/completions.ts">ChatCompletionMessage</a></code>
@@ -49,6 +50,7 @@ Types:
 - <code><a href="./src/resources/chat/completions.ts">ChatCompletionModality</a></code>
 - <code><a href="./src/resources/chat/completions.ts">ChatCompletionNamedToolChoice</a></code>
 - <code><a href="./src/resources/chat/completions.ts">ChatCompletionPredictionContent</a></code>
+- <code><a href="./src/resources/chat/completions.ts">ChatCompletionReasoningEffort</a></code>
 - <code><a href="./src/resources/chat/completions.ts">ChatCompletionRole</a></code>
 - <code><a href="./src/resources/chat/completions.ts">ChatCompletionStreamOptions</a></code>
 - <code><a href="./src/resources/chat/completions.ts">ChatCompletionSystemMessageParam</a></code>
diff --git jsr.json jsr.json
index ef9ce6848..d76a2040e 100644
--- jsr.json
+++ jsr.json
@@ -1,6 +1,6 @@
 {
   "name": "@openai/openai",
-  "version": "4.76.3",
+  "version": "4.77.0",
   "exports": "./index.ts",
   "publish": {
     "exclude": [
diff --git package.json package.json
index 47f363ba1..54633aa5d 100644
--- package.json
+++ package.json
@@ -1,6 +1,6 @@
 {
   "name": "openai",
-  "version": "4.76.3",
+  "version": "4.77.0",
   "description": "The official TypeScript library for the OpenAI API",
   "author": "OpenAI <[email protected]>",
   "types": "dist/index.d.ts",
diff --git src/core.ts src/core.ts
index e1a93f272..68f1e676a 100644
--- src/core.ts
+++ src/core.ts
@@ -198,7 +198,7 @@ export abstract class APIClient {
     maxRetries = 2,
     timeout = 600000, // 10 minutes
     httpAgent,
-    fetch: overridenFetch,
+    fetch: overriddenFetch,
   }: {
     baseURL: string;
     maxRetries?: number | undefined;
@@ -211,7 +211,7 @@ export abstract class APIClient {
     this.timeout = validatePositiveInteger('timeout', timeout);
     this.httpAgent = httpAgent;
 
-    this.fetch = overridenFetch ?? fetch;
+    this.fetch = overriddenFetch ?? fetch;
   }
 
   protected authHeaders(opts: FinalRequestOptions): Headers {
diff --git src/index.ts src/index.ts
index 58d7410e4..2320850fb 100644
--- src/index.ts
+++ src/index.ts
@@ -80,6 +80,7 @@ import {
   ChatCompletionCreateParams,
   ChatCompletionCreateParamsNonStreaming,
   ChatCompletionCreateParamsStreaming,
+  ChatCompletionDeveloperMessageParam,
   ChatCompletionFunctionCallOption,
   ChatCompletionFunctionMessageParam,
   ChatCompletionMessage,
@@ -88,6 +89,7 @@ import {
   ChatCompletionModality,
   ChatCompletionNamedToolChoice,
   ChatCompletionPredictionContent,
+  ChatCompletionReasoningEffort,
   ChatCompletionRole,
   ChatCompletionStreamOptions,
   ChatCompletionSystemMessageParam,
@@ -353,6 +355,7 @@ export declare namespace OpenAI {
     type ChatCompletionContentPartInputAudio as ChatCompletionContentPartInputAudio,
     type ChatCompletionContentPartRefusal as ChatCompletionContentPartRefusal,
     type ChatCompletionContentPartText as ChatCompletionContentPartText,
+    type ChatCompletionDeveloperMessageParam as ChatCompletionDeveloperMessageParam,
     type ChatCompletionFunctionCallOption as ChatCompletionFunctionCallOption,
     type ChatCompletionFunctionMessageParam as ChatCompletionFunctionMessageParam,
     type ChatCompletionMessage as ChatCompletionMessage,
@@ -361,6 +364,7 @@ export declare namespace OpenAI {
     type ChatCompletionModality as ChatCompletionModality,
     type ChatCompletionNamedToolChoice as ChatCompletionNamedToolChoice,
     type ChatCompletionPredictionContent as ChatCompletionPredictionContent,
+    type ChatCompletionReasoningEffort as ChatCompletionReasoningEffort,
     type ChatCompletionRole as ChatCompletionRole,
     type ChatCompletionStreamOptions as ChatCompletionStreamOptions,
     type ChatCompletionSystemMessageParam as ChatCompletionSystemMessageParam,
diff --git src/resources/chat/chat.ts src/resources/chat/chat.ts
index 09cd3d123..2230b19bd 100644
--- src/resources/chat/chat.ts
+++ src/resources/chat/chat.ts
@@ -16,6 +16,7 @@ import {
   ChatCompletionCreateParams,
   ChatCompletionCreateParamsNonStreaming,
   ChatCompletionCreateParamsStreaming,
+  ChatCompletionDeveloperMessageParam,
   ChatCompletionFunctionCallOption,
   ChatCompletionFunctionMessageParam,
   ChatCompletionMessage,
@@ -24,6 +25,7 @@ import {
   ChatCompletionModality,
   ChatCompletionNamedToolChoice,
   ChatCompletionPredictionContent,
+  ChatCompletionReasoningEffort,
   ChatCompletionRole,
   ChatCompletionStreamOptions,
   ChatCompletionSystemMessageParam,
@@ -44,6 +46,8 @@ export class Chat extends APIResource {
 }
 
 export type ChatModel =
+  | 'o1'
+  | 'o1-2024-12-17'
   | 'o1-preview'
   | 'o1-preview-2024-09-12'
   | 'o1-mini'
@@ -52,10 +56,11 @@ export type ChatModel =
   | 'gpt-4o-2024-11-20'
   | 'gpt-4o-2024-08-06'
   | 'gpt-4o-2024-05-13'
-  | 'gpt-4o-realtime-preview'
-  | 'gpt-4o-realtime-preview-2024-10-01'
   | 'gpt-4o-audio-preview'
   | 'gpt-4o-audio-preview-2024-10-01'
+  | 'gpt-4o-audio-preview-2024-12-17'
+  | 'gpt-4o-mini-audio-preview'
+  | 'gpt-4o-mini-audio-preview-2024-12-17'
   | 'chatgpt-4o-latest'
   | 'gpt-4o-mini'
   | 'gpt-4o-mini-2024-07-18'
@@ -96,6 +101,7 @@ export declare namespace Chat {
     type ChatCompletionContentPartInputAudio as ChatCompletionContentPartInputAudio,
     type ChatCompletionContentPartRefusal as ChatCompletionContentPartRefusal,
     type ChatCompletionContentPartText as ChatCompletionContentPartText,
+    type ChatCompletionDeveloperMessageParam as ChatCompletionDeveloperMessageParam,
     type ChatCompletionFunctionCallOption as ChatCompletionFunctionCallOption,
     type ChatCompletionFunctionMessageParam as ChatCompletionFunctionMessageParam,
     type ChatCompletionMessage as ChatCompletionMessage,
@@ -104,6 +110,7 @@ export declare namespace Chat {
     type ChatCompletionModality as ChatCompletionModality,
     type ChatCompletionNamedToolChoice as ChatCompletionNamedToolChoice,
     type ChatCompletionPredictionContent as ChatCompletionPredictionContent,
+    type ChatCompletionReasoningEffort as ChatCompletionReasoningEffort,
     type ChatCompletionRole as ChatCompletionRole,
     type ChatCompletionStreamOptions as ChatCompletionStreamOptions,
     type ChatCompletionSystemMessageParam as ChatCompletionSystemMessageParam,
diff --git src/resources/chat/completions.ts src/resources/chat/completions.ts
index 8e9a4385e..31f5814cb 100644
--- src/resources/chat/completions.ts
+++ src/resources/chat/completions.ts
@@ -15,6 +15,12 @@ export class Completions extends APIResource {
    * [text generation](https://platform.openai.com/docs/guides/text-generation),
    * [vision](https://platform.openai.com/docs/guides/vision), and
    * [audio](https://platform.openai.com/docs/guides/audio) guides.
+   *
+   * Parameter support can differ depending on the model used to generate the
+   * response, particularly for newer reasoning models. Parameters that are only
+   * supported for reasoning models are noted below. For the current state of
+   * unsupported parameters in reasoning models,
+   * [refer to the reasoning guide](https://platform.openai.com/docs/guides/reasoning).
    */
   create(
     body: ChatCompletionCreateParamsNonStreaming,
@@ -135,6 +141,9 @@ export namespace ChatCompletion {
   }
 }
 
+/**
+ * Messages sent by the model in response to user messages.
+ */
 export interface ChatCompletionAssistantMessageParam {
   /**
    * The role of the messages author, in this case `assistant`.
@@ -530,6 +539,29 @@ export interface ChatCompletionContentPartText {
   type: 'text';
 }
 
+/**
+ * Developer-provided instructions that the model should follow, regardless of
+ * messages sent by the user. With o1 models and newer, `developer` messages
+ * replace the previous `system` messages.
+ */
+export interface ChatCompletionDeveloperMessageParam {
+  /**
+   * The contents of the developer message.
+   */
+  content: string | Array<ChatCompletionContentPartText>;
+
+  /**
+   * The role of the messages author, in this case `developer`.
+   */
+  role: 'developer';
+
+  /**
+   * An optional name for the participant. Provides the model information to
+   * differentiate between participants of the same role.
+   */
+  name?: string;
+}
+
 /**
  * Specifying a particular function via `{"name": "my_function"}` forces the model
  * to call that function.
@@ -620,7 +652,13 @@ export namespace ChatCompletionMessage {
   }
 }
 
+/**
+ * Developer-provided instructions that the model should follow, regardless of
+ * messages sent by the user. With o1 models and newer, `developer` messages
+ * replace the previous `system` messages.
+ */
 export type ChatCompletionMessageParam =
+  | ChatCompletionDeveloperMessageParam
   | ChatCompletionSystemMessageParam
   | ChatCompletionUserMessageParam
   | ChatCompletionAssistantMessageParam
@@ -707,6 +745,16 @@ export interface ChatCompletionPredictionContent {
   type: 'content';
 }
 
+/**
+ * **o1 models only**
+ *
+ * Constrains effort on reasoning for
+ * [reasoning models](https://platform.openai.com/docs/guides/reasoning). Currently
+ * supported values are `low`, `medium`, and `high`. Reducing reasoning effort can
+ * result in faster responses and fewer tokens used on reasoning in a response.
+ */
+export type ChatCompletionReasoningEffort = 'low' | 'medium' | 'high';
+
 /**
  * The role of the author of a message
  */
@@ -725,6 +773,11 @@ export interface ChatCompletionStreamOptions {
   include_usage?: boolean;
 }
 
+/**
+ * Developer-provided instructions that the model should follow, regardless of
+ * messages sent by the user. With o1 models and newer, use `developer` messages
+ * for this purpose instead.
+ */
 export interface ChatCompletionSystemMessageParam {
   /**
    * The contents of the system message.
@@ -835,6 +888,10 @@ export interface ChatCompletionToolMessageParam {
   tool_call_id: string;
 }
 
+/**
+ * Messages sent by an end user, containing prompts or additional context
+ * information.
+ */
 export interface ChatCompletionUserMessageParam {
   /**
    * The contents of the user message.
@@ -891,20 +948,22 @@ export interface ChatCompletionCreateParamsBase {
    * Number between -2.0 and 2.0. Positive values penalize new tokens based on their
    * existing frequency in the text so far, decreasing the model's likelihood to
    * repeat the same line verbatim.
-   *
-   * [See more information about frequency and presence penalties.](https://platform.openai.com/docs/guides/text-generation)
    */
   frequency_penalty?: number | null;
 
   /**
    * Deprecated in favor of `tool_choice`.
    *
-   * Controls which (if any) function is called by the model. `none` means the model
-   * will not call a function and instead generates a message. `auto` means the model
-   * can pick between generating a message or calling a function. Specifying a
-   * particular function via `{"name": "my_function"}` forces the model to call that
+   * Controls which (if any) function is called by the model.
+   *
+   * `none` means the model will not call a function and instead generates a message.
+   *
+   * `auto` means the model can pick between generating a message or calling a
    * function.
    *
+   * Specifying a particular function via `{"name": "my_function"}` forces the model
+   * to call that function.
+   *
    * `none` is the default when no functions are present. `auto` is the default if
    * functions are present.
    */
@@ -998,17 +1057,21 @@ export interface ChatCompletionCreateParamsBase {
    * Number between -2.0 and 2.0. Positive values penalize new tokens based on
    * whether they appear in the text so far, increasing the model's likelihood to
    * talk about new topics.
-   *
-   * [See more information about frequency and presence penalties.](https://platform.openai.com/docs/guides/text-generation)
    */
   presence_penalty?: number | null;
 
   /**
-   * An object specifying the format that the model must output. Compatible with
-   * [GPT-4o](https://platform.openai.com/docs/models#gpt-4o),
-   * [GPT-4o mini](https://platform.openai.com/docs/models#gpt-4o-mini),
-   * [GPT-4 Turbo](https://platform.openai.com/docs/models#gpt-4-turbo-and-gpt-4) and
-   * all GPT-3.5 Turbo models newer than `gpt-3.5-turbo-1106`.
+   * **o1 models only**
+   *
+   * Constrains effort on reasoning for
+   * [reasoning models](https://platform.openai.com/docs/guides/reasoning). Currently
+   * supported values are `low`, `medium`, and `high`. Reducing reasoning effort can
+   * result in faster responses and fewer tokens used on reasoning in a response.
+   */
+  reasoning_effort?: ChatCompletionReasoningEffort;
+
+  /**
+   * An object specifying the format that the model must output.
    *
    * Setting to `{ "type": "json_schema", "json_schema": {...} }` enables Structured
    * Outputs which ensures the model will match your supplied JSON schema. Learn more
@@ -1088,9 +1151,8 @@ export interface ChatCompletionCreateParamsBase {
   /**
    * What sampling temperature to use, between 0 and 2. Higher values like 0.8 will
    * make the output more random, while lower values like 0.2 will make it more
-   * focused and deterministic.
-   *
-   * We generally recommend altering this or `top_p` but not both.
+   * focused and deterministic. We generally recommend altering this or `top_p` but
+   * not both.
    */
   temperature?: number | null;
 
@@ -1223,6 +1285,7 @@ export declare namespace Completions {
     type ChatCompletionContentPartInputAudio as ChatCompletionContentPartInputAudio,
     type ChatCompletionContentPartRefusal as ChatCompletionContentPartRefusal,
     type ChatCompletionContentPartText as ChatCompletionContentPartText,
+    type ChatCompletionDeveloperMessageParam as ChatCompletionDeveloperMessageParam,
     type ChatCompletionFunctionCallOption as ChatCompletionFunctionCallOption,
     type ChatCompletionFunctionMessageParam as ChatCompletionFunctionMessageParam,
     type ChatCompletionMessage as ChatCompletionMessage,
@@ -1231,6 +1294,7 @@ export declare namespace Completions {
     type ChatCompletionModality as ChatCompletionModality,
     type ChatCompletionNamedToolChoice as ChatCompletionNamedToolChoice,
     type ChatCompletionPredictionContent as ChatCompletionPredictionContent,
+    type ChatCompletionReasoningEffort as ChatCompletionReasoningEffort,
     type ChatCompletionRole as ChatCompletionRole,
     type ChatCompletionStreamOptions as ChatCompletionStreamOptions,
     type ChatCompletionSystemMessageParam as ChatCompletionSystemMessageParam,
diff --git src/resources/chat/index.ts src/resources/chat/index.ts
index 262bf75a2..c3be19402 100644
--- src/resources/chat/index.ts
+++ src/resources/chat/index.ts
@@ -13,6 +13,7 @@ export {
   type ChatCompletionContentPartInputAudio,
   type ChatCompletionContentPartRefusal,
   type ChatCompletionContentPartText,
+  type ChatCompletionDeveloperMessageParam,
   type ChatCompletionFunctionCallOption,
   type ChatCompletionFunctionMessageParam,
   type ChatCompletionMessage,
@@ -21,6 +22,7 @@ export {
   type ChatCompletionModality,
   type ChatCompletionNamedToolChoice,
   type ChatCompletionPredictionContent,
+  type ChatCompletionReasoningEffort,
   type ChatCompletionRole,
   type ChatCompletionStreamOptions,
   type ChatCompletionSystemMessageParam,
diff --git src/resources/fine-tuning/jobs/jobs.ts src/resources/fine-tuning/jobs/jobs.ts
index 0c320e028..44dd011aa 100644
--- src/resources/fine-tuning/jobs/jobs.ts
+++ src/resources/fine-tuning/jobs/jobs.ts
@@ -127,9 +127,8 @@ export interface FineTuningJob {
   finished_at: number | null;
 
   /**
-   * The hyperparameters used for the fine-tuning job. See the
-   * [fine-tuning guide](https://platform.openai.com/docs/guides/fine-tuning) for
-   * more details.
+   * The hyperparameters used for the fine-tuning job. This value will only be
+   * returned when running `supervised` jobs.
    */
   hyperparameters: FineTuningJob.Hyperparameters;
 
@@ -195,6 +194,11 @@ export interface FineTuningJob {
    * A list of integrations to enable for this fine-tuning job.
    */
   integrations?: Array<FineTuningJobWandbIntegrationObject> | null;
+
+  /**
+   * The method used for fine-tuning.
+   */
+  method?: FineTuningJob.Method;
 }
 
 export namespace FineTuningJob {
@@ -221,18 +225,125 @@ export namespace FineTuningJob {
   }
 
   /**
-   * The hyperparameters used for the fine-tuning job. See the
-   * [fine-tuning guide](https://platform.openai.com/docs/guides/fine-tuning) for
-   * more details.
+   * The hyperparameters used for the fine-tuning job. This value will only be
+   * returned when running `supervised` jobs.
    */
   export interface Hyperparameters {
+    /**
+     * Number of examples in each batch. A larger batch size means that model
+     * parameters are updated less frequently, but with lower variance.
+     */
+    batch_size?: 'auto' | number;
+
+    /**
+     * Scaling factor for the learning rate. A smaller learning rate may be useful to
+     * avoid overfitting.
+     */
+    learning_rate_multiplier?: 'auto' | number;
+
     /**
      * The number of epochs to train the model for. An epoch refers to one full cycle
-     * through the training dataset. "auto" decides the optimal number of epochs based
-     * on the size of the dataset. If setting the number manually, we support any
-     * number between 1 and 50 epochs.
+     * through the training dataset.
+     */
+    n_epochs?: 'auto' | number;
+  }
+
+  /**
+   * The method used for fine-tuning.
+   */
+  export interface Method {
+    /**
+     * Configuration for the DPO fine-tuning method.
+     */
+    dpo?: Method.Dpo;
+
+    /**
+     * Configuration for the supervised fine-tuning method.
      */
-    n_epochs: 'auto' | number;
+    supervised?: Method.Supervised;
+
+    /**
+     * The type of method. Is either `supervised` or `dpo`.
+     */
+    type?: 'supervised' | 'dpo';
+  }
+
+  export namespace Method {
+    /**
+     * Configuration for the DPO fine-tuning method.
+     */
+    export interface Dpo {
+      /**
+       * The hyperparameters used for the fine-tuning job.
+       */
+      hyperparameters?: Dpo.Hyperparameters;
+    }
+
+    export namespace Dpo {
+      /**
+       * The hyperparameters used for the fine-tuning job.
+       */
+      export interface Hyperparameters {
+        /**
+         * Number of examples in each batch. A larger batch size means that model
+         * parameters are updated less frequently, but with lower variance.
+         */
+        batch_size?: 'auto' | number;
+
+        /**
+         * The beta value for the DPO method. A higher beta value will increase the weight
+         * of the penalty between the policy and reference model.
+         */
+        beta?: 'auto' | number;
+
+        /**
+         * Scaling factor for the learning rate. A smaller learning rate may be useful to
+         * avoid overfitting.
+         */
+        learning_rate_multiplier?: 'auto' | number;
+
+        /**
+         * The number of epochs to train the model for. An epoch refers to one full cycle
+         * through the training dataset.
+         */
+        n_epochs?: 'auto' | number;
+      }
+    }
+
+    /**
+     * Configuration for the supervised fine-tuning method.
+     */
+    export interface Supervised {
+      /**
+       * The hyperparameters used for the fine-tuning job.
+       */
+      hyperparameters?: Supervised.Hyperparameters;
+    }
+
+    export namespace Supervised {
+      /**
+       * The hyperparameters used for the fine-tuning job.
+       */
+      export interface Hyperparameters {
+        /**
+         * Number of examples in each batch. A larger batch size means that model
+         * parameters are updated less frequently, but with lower variance.
+         */
+        batch_size?: 'auto' | number;
+
+        /**
+         * Scaling factor for the learning rate. A smaller learning rate may be useful to
+         * avoid overfitting.
+         */
+        learning_rate_multiplier?: 'auto' | number;
+
+        /**
+         * The number of epochs to train the model for. An epoch refers to one full cycle
+         * through the training dataset.
+         */
+        n_epochs?: 'auto' | number;
+      }
+    }
   }
 }
 
@@ -240,15 +351,40 @@ export namespace FineTuningJob {
  * Fine-tuning job event object
  */
 export interface FineTuningJobEvent {
+  /**
+   * The object identifier.
+   */
   id: string;
 
+  /**
+   * The Unix timestamp (in seconds) for when the fine-tuning job was created.
+   */
   created_at: number;
 
+  /**
+   * The log level of the event.
+   */
   level: 'info' | 'warn' | 'error';
 
+  /**
+   * The message of the event.
+   */
   message: string;
 
+  /**
+   * The object type, which is always "fine_tuning.job.event".
+   */
   object: 'fine_tuning.job.event';
+
+  /**
+   * The data associated with the event.
+   */
+  data?: unknown;
+
+  /**
+   * The type of event.
+   */
+  type?: 'message' | 'metrics';
 }
 
 export type FineTuningJobIntegration = FineTuningJobWandbIntegrationObject;
@@ -318,8 +454,10 @@ export interface JobCreateParams {
    * your file with the purpose `fine-tune`.
    *
    * The contents of the file should differ depending on if the model uses the
-   * [chat](https://platform.openai.com/docs/api-reference/fine-tuning/chat-input) or
+   * [chat](https://platform.openai.com/docs/api-reference/fine-tuning/chat-input),
    * [completions](https://platform.openai.com/docs/api-reference/fine-tuning/completions-input)
+   * format, or if the fine-tuning method uses the
+   * [preference](https://platform.openai.com/docs/api-reference/fine-tuning/preference-input)
    * format.
    *
    * See the [fine-tuning guide](https://platform.openai.com/docs/guides/fine-tuning)
@@ -328,7 +466,8 @@ export interface JobCreateParams {
   training_file: string;
 
   /**
-   * The hyperparameters used for the fine-tuning job.
+   * The hyperparameters used for the fine-tuning job. This value is now deprecated
+   * in favor of `method`, and should be passed in under the `method` parameter.
    */
   hyperparameters?: JobCreateParams.Hyperparameters;
 
@@ -337,6 +476,11 @@ export interface JobCreateParams {
    */
   integrations?: Array<JobCreateParams.Integration> | null;
 
+  /**
+   * The method used for fine-tuning.
+   */
+  method?: JobCreateParams.Method;
+
   /**
    * The seed controls the reproducibility of the job. Passing in the same seed and
    * job parameters should produce the same results, but may differ in rare cases. If
@@ -372,7 +516,9 @@ export interface JobCreateParams {
 
 export namespace JobCreateParams {
   /**
-   * The hyperparameters used for the fine-tuning job.
+   * @deprecated: The hyperparameters used for the fine-tuning job. This value is now
+   * deprecated in favor of `method`, and should be passed in under the `method`
+   * parameter.
    */
   export interface Hyperparameters {
     /**
@@ -444,6 +590,104 @@ export namespace JobCreateParams {
       tags?: Array<string>;
     }
   }
+
+  /**
+   * The method used for fine-tuning.
+   */
+  export interface Method {
+    /**
+     * Configuration for the DPO fine-tuning method.
+     */
+    dpo?: Method.Dpo;
+
+    /**
+     * Configuration for the supervised fine-tuning method.
+     */
+    supervised?: Method.Supervised;
+
+    /**
+     * The type of method. Is either `supervised` or `dpo`.
+     */
+    type?: 'supervised' | 'dpo';
+  }
+
+  export namespace Method {
+    /**
+     * Configuration for the DPO fine-tuning method.
+     */
+    export interface Dpo {
+      /**
+       * The hyperparameters used for the fine-tuning job.
+       */
+      hyperparameters?: Dpo.Hyperparameters;
+    }
+
+    export namespace Dpo {
+      /**
+       * The hyperparameters used for the fine-tuning job.
+       */
+      export interface Hyperparameters {
+        /**
+         * Number of examples in each batch. A larger batch size means that model
+         * parameters are updated less frequently, but with lower variance.
+         */
+        batch_size?: 'auto' | number;
+
+        /**
+         * The beta value for the DPO method. A higher beta value will increase the weight
+         * of the penalty between the policy and reference model.
+         */
+        beta?: 'auto' | number;
+
+        /**
+         * Scaling factor for the learning rate. A smaller learning rate may be useful to
+         * avoid overfitting.
+         */
+        learning_rate_multiplier?: 'auto' | number;
+
+        /**
+         * The number of epochs to train the model for. An epoch refers to one full cycle
+         * through the training dataset.
+         */
+        n_epochs?: 'auto' | number;
+      }
+    }
+
+    /**
+     * Configuration for the supervised fine-tuning method.
+     */
+    export interface Supervised {
+      /**
+       * The hyperparameters used for the fine-tuning job.
+       */
+      hyperparameters?: Supervised.Hyperparameters;
+    }
+
+    export namespace Supervised {
+      /**
+       * The hyperparameters used for the fine-tuning job.
+       */
+      export interface Hyperparameters {
+        /**
+         * Number of examples in each batch. A larger batch size means that model
+         * parameters are updated less frequently, but with lower variance.
+         */
+        batch_size?: 'auto' | number;
+
+        /**
+         * Scaling factor for the learning rate. A smaller learning rate may be useful to
+         * avoid overfitting.
+         */
+        learning_rate_multiplier?: 'auto' | number;
+
+        /**
+         * The number of epochs to train the model for. An epoch refers to one full cycle
+         * through the training dataset.
+         */
+        n_epochs?: 'auto' | number;
+      }
+    }
+  }
 }
 
 export interface JobListParams extends CursorPageParams {}
diff --git src/version.ts src/version.ts
index 01cd56405..fdf4e5224 100644
--- src/version.ts
+++ src/version.ts
@@ -1 +1 @@
-export const VERSION = '4.76.3'; // x-release-please-version
+export const VERSION = '4.77.0'; // x-release-please-version
diff --git tests/api-resources/chat/completions.test.ts tests/api-resources/chat/completions.test.ts
index 5dcbf9ad6..dfc09f69b 100644
--- tests/api-resources/chat/completions.test.ts
+++ tests/api-resources/chat/completions.test.ts
@@ -11,7 +11,7 @@ const client = new OpenAI({
 describe('resource completions', () => {
   test('create: only required params', async () => {
     const responsePromise = client.chat.completions.create({
-      messages: [{ content: 'string', role: 'system' }],
+      messages: [{ content: 'string', role: 'developer' }],
       model: 'gpt-4o',
     });
     const rawResponse = await responsePromise.asResponse();
@@ -25,7 +25,7 @@ describe('resource completions', () => {
 
   test('create: required and optional params', async () => {
     const response = await client.chat.completions.create({
-      messages: [{ content: 'string', role: 'system', name: 'name' }],
+      messages: [{ content: 'string', role: 'developer', name: 'name' }],
       model: 'gpt-4o',
       audio: { format: 'wav', voice: 'alloy' },
       frequency_penalty: -2,
@@ -41,6 +41,7 @@ describe('resource completions', () => {
       parallel_tool_calls: true,
       prediction: { content: 'string', type: 'content' },
       presence_penalty: -2,
+      reasoning_effort: 'low',
       response_format: { type: 'text' },
       seed: -9007199254740991,
       service_tier: 'auto',
diff --git tests/api-resources/fine-tuning/jobs/jobs.test.ts tests/api-resources/fine-tuning/jobs/jobs.test.ts
index 0ab09768a..4de83a8b7 100644
--- tests/api-resources/fine-tuning/jobs/jobs.test.ts
+++ tests/api-resources/fine-tuning/jobs/jobs.test.ts
@@ -34,6 +34,20 @@ describe('resource jobs', () => {
           wandb: { project: 'my-wandb-project', entity: 'entity', name: 'name', tags: ['custom-tag'] },
         },
       ],
+      method: {
+        dpo: {
+          hyperparameters: {
+            batch_size: 'auto',
+            beta: 'auto',
+            learning_rate_multiplier: 'auto',
+            n_epochs: 'auto',
+          },
+        },
+        supervised: {
+          hyperparameters: { batch_size: 'auto', learning_rate_multiplier: 'auto', n_epochs: 'auto' },
+        },
+        type: 'supervised',
+      },
       seed: 42,
       suffix: 'x',
       validation_file: 'file-abc123',
diff --git tests/index.test.ts tests/index.test.ts
index f39571121..bf113e7bb 100644
--- tests/index.test.ts
+++ tests/index.test.ts
@@ -177,7 +177,7 @@ describe('instantiate client', () => {
     expect(client.apiKey).toBe('My API Key');
   });
 
-  test('with overriden environment variable arguments', () => {
+  test('with overridden environment variable arguments', () => {
     // set options via env var
     process.env['OPENAI_API_KEY'] = 'another My API Key';
     const client = new OpenAI({ apiKey: 'My API Key' });

Description

This PR updates the OpenAI Node.js library to version 4.77.0. It introduces new features, updates API models, and includes minor fixes and improvements.

Changes

  1. .release-please-manifest.json:

    • Updated version to 4.77.0
  2. .stats.yml:

    • Updated OpenAPI spec URL
  3. CHANGELOG.md:

    • Added entry for version 4.77.0
    • New features: o1 and GPT-4o models, preference fine-tuning
    • Chores: Fixed typos, updated internal spec
  4. api.md:

    • Added new types: ChatCompletionDeveloperMessageParam, ChatCompletionReasoningEffort
  5. jsr.json and package.json:

    • Updated version to 4.77.0
  6. src/core.ts:

    • Renamed overridenFetch to overriddenFetch
  7. src/index.ts, src/resources/chat/chat.ts, src/resources/chat/completions.ts, src/resources/chat/index.ts:

    • Added new types and models
    • Updated existing types with new parameters and descriptions
    • Added support for o1 models and reasoning models
  8. src/resources/fine-tuning/jobs/jobs.ts:

    • Updated FineTuningJob interface with new fields and types
    • Added support for DPO (Direct Preference Optimization) fine-tuning method
  9. src/version.ts:

    • Updated version to 4.77.0
  10. tests/api-resources/chat/completions.test.ts:

    • Updated test cases to use developer role instead of system
    • Added reasoning_effort parameter in test
  11. tests/api-resources/fine-tuning/jobs/jobs.test.ts:

    • Added new method parameter in test case
  12. tests/index.test.ts:

    • Fixed typo in test description
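For readers picking up this release, here is a minimal sketch of a chat completion payload using the two additions called out above: the `developer` role (which replaces `system` for o1 models and newer) and the o1-only `reasoning_effort` parameter. The model name and prompt text are illustrative; the object is built standalone so it can be inspected without an API key, then passed to `client.chat.completions.create()`.

```typescript
// Sketch of a request body exercising the 4.77.0 additions.
// `reasoning_effort` accepts 'low' | 'medium' | 'high' (o1 models only).
const params = {
  model: 'o1',
  reasoning_effort: 'low' as const,
  messages: [
    // With o1 models and newer, `developer` messages replace `system` messages.
    { role: 'developer' as const, content: 'Answer concisely.' },
    { role: 'user' as const, content: 'Summarize this release.' },
  ],
};

console.log(params.reasoning_effort);
console.log(params.messages[0].role);
```

On older (non-reasoning) models the `system` role remains valid, so existing callers are unaffected by this change.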
sequenceDiagram
    participant Dev as Developer
    participant PR as Pull Request
    participant Lib as OpenAI Library
    participant API as OpenAI API

    Dev->>PR: Create Pull Request
    PR->>Lib: Update version to 4.77.0
    Lib->>Lib: Add new models (o1, GPT-4o)
    Lib->>Lib: Update types and interfaces
    Lib->>Lib: Add support for DPO fine-tuning
    Lib->>Lib: Update tests
    Lib->>API: Support new API features
    API->>Lib: Provide updated functionality
    Lib->>PR: Integrate changes
    PR->>Dev: Ready for review and merge
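The DPO support added to `fine_tuning.jobs` can be sketched as below, mirroring the shape exercised in the updated test suite. The file ID and model name are placeholders; the object would be passed to `client.fineTuning.jobs.create()`.

```typescript
// Sketch of the new `method` parameter for fine-tuning job creation (4.77.0).
// This replaces the now-deprecated top-level `hyperparameters` field.
const jobParams = {
  model: 'gpt-4o-mini-2024-07-18',
  training_file: 'file-abc123', // placeholder file ID
  method: {
    type: 'dpo' as const,
    dpo: {
      hyperparameters: {
        batch_size: 'auto' as const,
        // Higher beta increases the weight of the penalty between the
        // policy and reference model.
        beta: 'auto' as const,
        learning_rate_multiplier: 'auto' as const,
        n_epochs: 'auto' as const,
      },
    },
  },
};

console.log(jobParams.method.type);
```

For supervised jobs, `type: 'supervised'` is used with a `supervised.hyperparameters` object of the same shape minus `beta`.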

Possible Issues

No major issues identified. The changes appear to be well-structured and documented.

Security Hotspots

No significant security concerns identified in this update.


github-actions bot commented Jan 1, 2025

anthropic debug - [puLL-Merge] - openai/[email protected]

Diff
diff --git .release-please-manifest.json .release-please-manifest.json
index 52c31fe71..6b843f931 100644
--- .release-please-manifest.json
+++ .release-please-manifest.json
@@ -1,3 +1,3 @@
 {
-  ".": "4.76.3"
+  ".": "4.77.0"
 }
diff --git .stats.yml .stats.yml
index 3cc042fe0..7b5235e3c 100644
--- .stats.yml
+++ .stats.yml
@@ -1,2 +1,2 @@
 configured_endpoints: 68
-openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai-2e0e0678be19d1118fd796af291822075e40538dba326611e177e9f3dc245a53.yml
+openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai-0d64ca9e45f51b4279f87b205eeb3a3576df98407698ce053f2e2302c1c08df1.yml
diff --git CHANGELOG.md CHANGELOG.md
index 4b6f57fe4..d33ce4c1a 100644
--- CHANGELOG.md
+++ CHANGELOG.md
@@ -1,5 +1,19 @@
 # Changelog
 
+## 4.77.0 (2024-12-17)
+
+Full Changelog: [v4.76.3...v4.77.0](https://github.com/openai/openai-node/compare/v4.76.3...v4.77.0)
+
+### Features
+
+* **api:** new o1 and GPT-4o models + preference fine-tuning ([#1229](https://github.com/openai/openai-node/issues/1229)) ([2e872d4](https://github.com/openai/openai-node/commit/2e872d4ac3717ab8f61741efffb7a31acd798338))
+
+
+### Chores
+
+* **internal:** fix some typos ([#1227](https://github.com/openai/openai-node/issues/1227)) ([d51fcfe](https://github.com/openai/openai-node/commit/d51fcfe3a66550a684eeeb0e6f17e1d9825cdf78))
+* **internal:** spec update ([#1230](https://github.com/openai/openai-node/issues/1230)) ([ed2b61d](https://github.com/openai/openai-node/commit/ed2b61d32703b64d9f91223bc02627a607f60483))
+
 ## 4.76.3 (2024-12-13)
 
 Full Changelog: [v4.76.2...v4.76.3](https://github.com/openai/openai-node/compare/v4.76.2...v4.76.3)
diff --git api.md api.md
index 465730de8..54bcf08d7 100644
--- api.md
+++ api.md
@@ -41,6 +41,7 @@ Types:
 - <code><a href="./src/resources/chat/completions.ts">ChatCompletionContentPartInputAudio</a></code>
 - <code><a href="./src/resources/chat/completions.ts">ChatCompletionContentPartRefusal</a></code>
 - <code><a href="./src/resources/chat/completions.ts">ChatCompletionContentPartText</a></code>
+- <code><a href="./src/resources/chat/completions.ts">ChatCompletionDeveloperMessageParam</a></code>
 - <code><a href="./src/resources/chat/completions.ts">ChatCompletionFunctionCallOption</a></code>
 - <code><a href="./src/resources/chat/completions.ts">ChatCompletionFunctionMessageParam</a></code>
 - <code><a href="./src/resources/chat/completions.ts">ChatCompletionMessage</a></code>
@@ -49,6 +50,7 @@ Types:
 - <code><a href="./src/resources/chat/completions.ts">ChatCompletionModality</a></code>
 - <code><a href="./src/resources/chat/completions.ts">ChatCompletionNamedToolChoice</a></code>
 - <code><a href="./src/resources/chat/completions.ts">ChatCompletionPredictionContent</a></code>
+- <code><a href="./src/resources/chat/completions.ts">ChatCompletionReasoningEffort</a></code>
 - <code><a href="./src/resources/chat/completions.ts">ChatCompletionRole</a></code>
 - <code><a href="./src/resources/chat/completions.ts">ChatCompletionStreamOptions</a></code>
 - <code><a href="./src/resources/chat/completions.ts">ChatCompletionSystemMessageParam</a></code>
diff --git jsr.json jsr.json
index ef9ce6848..d76a2040e 100644
--- jsr.json
+++ jsr.json
@@ -1,6 +1,6 @@
 {
   "name": "@openai/openai",
-  "version": "4.76.3",
+  "version": "4.77.0",
   "exports": "./index.ts",
   "publish": {
     "exclude": [
diff --git package.json package.json
index 47f363ba1..54633aa5d 100644
--- package.json
+++ package.json
@@ -1,6 +1,6 @@
 {
   "name": "openai",
-  "version": "4.76.3",
+  "version": "4.77.0",
   "description": "The official TypeScript library for the OpenAI API",
   "author": "OpenAI <[email protected]>",
   "types": "dist/index.d.ts",
diff --git src/core.ts src/core.ts
index e1a93f272..68f1e676a 100644
--- src/core.ts
+++ src/core.ts
@@ -198,7 +198,7 @@ export abstract class APIClient {
     maxRetries = 2,
     timeout = 600000, // 10 minutes
     httpAgent,
-    fetch: overridenFetch,
+    fetch: overriddenFetch,
   }: {
     baseURL: string;
     maxRetries?: number | undefined;
@@ -211,7 +211,7 @@ export abstract class APIClient {
     this.timeout = validatePositiveInteger('timeout', timeout);
     this.httpAgent = httpAgent;
 
-    this.fetch = overridenFetch ?? fetch;
+    this.fetch = overriddenFetch ?? fetch;
   }
 
   protected authHeaders(opts: FinalRequestOptions): Headers {
diff --git src/index.ts src/index.ts
index 58d7410e4..2320850fb 100644
--- src/index.ts
+++ src/index.ts
@@ -80,6 +80,7 @@ import {
   ChatCompletionCreateParams,
   ChatCompletionCreateParamsNonStreaming,
   ChatCompletionCreateParamsStreaming,
+  ChatCompletionDeveloperMessageParam,
   ChatCompletionFunctionCallOption,
   ChatCompletionFunctionMessageParam,
   ChatCompletionMessage,
@@ -88,6 +89,7 @@ import {
   ChatCompletionModality,
   ChatCompletionNamedToolChoice,
   ChatCompletionPredictionContent,
+  ChatCompletionReasoningEffort,
   ChatCompletionRole,
   ChatCompletionStreamOptions,
   ChatCompletionSystemMessageParam,
@@ -353,6 +355,7 @@ export declare namespace OpenAI {
     type ChatCompletionContentPartInputAudio as ChatCompletionContentPartInputAudio,
     type ChatCompletionContentPartRefusal as ChatCompletionContentPartRefusal,
     type ChatCompletionContentPartText as ChatCompletionContentPartText,
+    type ChatCompletionDeveloperMessageParam as ChatCompletionDeveloperMessageParam,
     type ChatCompletionFunctionCallOption as ChatCompletionFunctionCallOption,
     type ChatCompletionFunctionMessageParam as ChatCompletionFunctionMessageParam,
     type ChatCompletionMessage as ChatCompletionMessage,
@@ -361,6 +364,7 @@ export declare namespace OpenAI {
     type ChatCompletionModality as ChatCompletionModality,
     type ChatCompletionNamedToolChoice as ChatCompletionNamedToolChoice,
     type ChatCompletionPredictionContent as ChatCompletionPredictionContent,
+    type ChatCompletionReasoningEffort as ChatCompletionReasoningEffort,
     type ChatCompletionRole as ChatCompletionRole,
     type ChatCompletionStreamOptions as ChatCompletionStreamOptions,
     type ChatCompletionSystemMessageParam as ChatCompletionSystemMessageParam,
diff --git src/resources/chat/chat.ts src/resources/chat/chat.ts
index 09cd3d123..2230b19bd 100644
--- src/resources/chat/chat.ts
+++ src/resources/chat/chat.ts
@@ -16,6 +16,7 @@ import {
   ChatCompletionCreateParams,
   ChatCompletionCreateParamsNonStreaming,
   ChatCompletionCreateParamsStreaming,
+  ChatCompletionDeveloperMessageParam,
   ChatCompletionFunctionCallOption,
   ChatCompletionFunctionMessageParam,
   ChatCompletionMessage,
@@ -24,6 +25,7 @@ import {
   ChatCompletionModality,
   ChatCompletionNamedToolChoice,
   ChatCompletionPredictionContent,
+  ChatCompletionReasoningEffort,
   ChatCompletionRole,
   ChatCompletionStreamOptions,
   ChatCompletionSystemMessageParam,
@@ -44,6 +46,8 @@ export class Chat extends APIResource {
 }
 
 export type ChatModel =
+  | 'o1'
+  | 'o1-2024-12-17'
   | 'o1-preview'
   | 'o1-preview-2024-09-12'
   | 'o1-mini'
@@ -52,10 +56,11 @@ export type ChatModel =
   | 'gpt-4o-2024-11-20'
   | 'gpt-4o-2024-08-06'
   | 'gpt-4o-2024-05-13'
-  | 'gpt-4o-realtime-preview'
-  | 'gpt-4o-realtime-preview-2024-10-01'
   | 'gpt-4o-audio-preview'
   | 'gpt-4o-audio-preview-2024-10-01'
+  | 'gpt-4o-audio-preview-2024-12-17'
+  | 'gpt-4o-mini-audio-preview'
+  | 'gpt-4o-mini-audio-preview-2024-12-17'
   | 'chatgpt-4o-latest'
   | 'gpt-4o-mini'
   | 'gpt-4o-mini-2024-07-18'
@@ -96,6 +101,7 @@ export declare namespace Chat {
     type ChatCompletionContentPartInputAudio as ChatCompletionContentPartInputAudio,
     type ChatCompletionContentPartRefusal as ChatCompletionContentPartRefusal,
     type ChatCompletionContentPartText as ChatCompletionContentPartText,
+    type ChatCompletionDeveloperMessageParam as ChatCompletionDeveloperMessageParam,
     type ChatCompletionFunctionCallOption as ChatCompletionFunctionCallOption,
     type ChatCompletionFunctionMessageParam as ChatCompletionFunctionMessageParam,
     type ChatCompletionMessage as ChatCompletionMessage,
@@ -104,6 +110,7 @@ export declare namespace Chat {
     type ChatCompletionModality as ChatCompletionModality,
     type ChatCompletionNamedToolChoice as ChatCompletionNamedToolChoice,
     type ChatCompletionPredictionContent as ChatCompletionPredictionContent,
+    type ChatCompletionReasoningEffort as ChatCompletionReasoningEffort,
     type ChatCompletionRole as ChatCompletionRole,
     type ChatCompletionStreamOptions as ChatCompletionStreamOptions,
     type ChatCompletionSystemMessageParam as ChatCompletionSystemMessageParam,
diff --git src/resources/chat/completions.ts src/resources/chat/completions.ts
index 8e9a4385e..31f5814cb 100644
--- src/resources/chat/completions.ts
+++ src/resources/chat/completions.ts
@@ -15,6 +15,12 @@ export class Completions extends APIResource {
    * [text generation](https://platform.openai.com/docs/guides/text-generation),
    * [vision](https://platform.openai.com/docs/guides/vision), and
    * [audio](https://platform.openai.com/docs/guides/audio) guides.
+   *
+   * Parameter support can differ depending on the model used to generate the
+   * response, particularly for newer reasoning models. Parameters that are only
+   * supported for reasoning models are noted below. For the current state of
+   * unsupported parameters in reasoning models,
+   * [refer to the reasoning guide](https://platform.openai.com/docs/guides/reasoning).
    */
   create(
     body: ChatCompletionCreateParamsNonStreaming,
@@ -135,6 +141,9 @@ export namespace ChatCompletion {
   }
 }
 
+/**
+ * Messages sent by the model in response to user messages.
+ */
 export interface ChatCompletionAssistantMessageParam {
   /**
    * The role of the messages author, in this case `assistant`.
@@ -530,6 +539,29 @@ export interface ChatCompletionContentPartText {
   type: 'text';
 }
 
+/**
+ * Developer-provided instructions that the model should follow, regardless of
+ * messages sent by the user. With o1 models and newer, `developer` messages
+ * replace the previous `system` messages.
+ */
+export interface ChatCompletionDeveloperMessageParam {
+  /**
+   * The contents of the developer message.
+   */
+  content: string | Array<ChatCompletionContentPartText>;
+
+  /**
+   * The role of the messages author, in this case `developer`.
+   */
+  role: 'developer';
+
+  /**
+   * An optional name for the participant. Provides the model information to
+   * differentiate between participants of the same role.
+   */
+  name?: string;
+}
+
 /**
  * Specifying a particular function via `{"name": "my_function"}` forces the model
  * to call that function.
@@ -620,7 +652,13 @@ export namespace ChatCompletionMessage {
   }
 }
 
+/**
+ * Developer-provided instructions that the model should follow, regardless of
+ * messages sent by the user. With o1 models and newer, `developer` messages
+ * replace the previous `system` messages.
+ */
 export type ChatCompletionMessageParam =
+  | ChatCompletionDeveloperMessageParam
   | ChatCompletionSystemMessageParam
   | ChatCompletionUserMessageParam
   | ChatCompletionAssistantMessageParam
@@ -707,6 +745,16 @@ export interface ChatCompletionPredictionContent {
   type: 'content';
 }
 
+/**
+ * **o1 models only**
+ *
+ * Constrains effort on reasoning for
+ * [reasoning models](https://platform.openai.com/docs/guides/reasoning). Currently
+ * supported values are `low`, `medium`, and `high`. Reducing reasoning effort can
+ * result in faster responses and fewer tokens used on reasoning in a response.
+ */
+export type ChatCompletionReasoningEffort = 'low' | 'medium' | 'high';
+
 /**
  * The role of the author of a message
  */
@@ -725,6 +773,11 @@ export interface ChatCompletionStreamOptions {
   include_usage?: boolean;
 }
 
+/**
+ * Developer-provided instructions that the model should follow, regardless of
+ * messages sent by the user. With o1 models and newer, use `developer` messages
+ * for this purpose instead.
+ */
 export interface ChatCompletionSystemMessageParam {
   /**
    * The contents of the system message.
@@ -835,6 +888,10 @@ export interface ChatCompletionToolMessageParam {
   tool_call_id: string;
 }
 
+/**
+ * Messages sent by an end user, containing prompts or additional context
+ * information.
+ */
 export interface ChatCompletionUserMessageParam {
   /**
    * The contents of the user message.
@@ -891,20 +948,22 @@ export interface ChatCompletionCreateParamsBase {
    * Number between -2.0 and 2.0. Positive values penalize new tokens based on their
    * existing frequency in the text so far, decreasing the model's likelihood to
    * repeat the same line verbatim.
-   *
-   * [See more information about frequency and presence penalties.](https://platform.openai.com/docs/guides/text-generation)
    */
   frequency_penalty?: number | null;
 
   /**
    * Deprecated in favor of `tool_choice`.
    *
-   * Controls which (if any) function is called by the model. `none` means the model
-   * will not call a function and instead generates a message. `auto` means the model
-   * can pick between generating a message or calling a function. Specifying a
-   * particular function via `{"name": "my_function"}` forces the model to call that
+   * Controls which (if any) function is called by the model.
+   *
+   * `none` means the model will not call a function and instead generates a message.
+   *
+   * `auto` means the model can pick between generating a message or calling a
    * function.
    *
+   * Specifying a particular function via `{"name": "my_function"}` forces the model
+   * to call that function.
+   *
    * `none` is the default when no functions are present. `auto` is the default if
    * functions are present.
    */
@@ -998,17 +1057,21 @@ export interface ChatCompletionCreateParamsBase {
    * Number between -2.0 and 2.0. Positive values penalize new tokens based on
    * whether they appear in the text so far, increasing the model's likelihood to
    * talk about new topics.
-   *
-   * [See more information about frequency and presence penalties.](https://platform.openai.com/docs/guides/text-generation)
    */
   presence_penalty?: number | null;
 
   /**
-   * An object specifying the format that the model must output. Compatible with
-   * [GPT-4o](https://platform.openai.com/docs/models#gpt-4o),
-   * [GPT-4o mini](https://platform.openai.com/docs/models#gpt-4o-mini),
-   * [GPT-4 Turbo](https://platform.openai.com/docs/models#gpt-4-turbo-and-gpt-4) and
-   * all GPT-3.5 Turbo models newer than `gpt-3.5-turbo-1106`.
+   * **o1 models only**
+   *
+   * Constrains effort on reasoning for
+   * [reasoning models](https://platform.openai.com/docs/guides/reasoning). Currently
+   * supported values are `low`, `medium`, and `high`. Reducing reasoning effort can
+   * result in faster responses and fewer tokens used on reasoning in a response.
+   */
+  reasoning_effort?: ChatCompletionReasoningEffort;
+
+  /**
+   * An object specifying the format that the model must output.
    *
    * Setting to `{ "type": "json_schema", "json_schema": {...} }` enables Structured
    * Outputs which ensures the model will match your supplied JSON schema. Learn more
@@ -1088,9 +1151,8 @@ export interface ChatCompletionCreateParamsBase {
   /**
    * What sampling temperature to use, between 0 and 2. Higher values like 0.8 will
    * make the output more random, while lower values like 0.2 will make it more
-   * focused and deterministic.
-   *
-   * We generally recommend altering this or `top_p` but not both.
+   * focused and deterministic. We generally recommend altering this or `top_p` but
+   * not both.
    */
   temperature?: number | null;
 
@@ -1223,6 +1285,7 @@ export declare namespace Completions {
     type ChatCompletionContentPartInputAudio as ChatCompletionContentPartInputAudio,
     type ChatCompletionContentPartRefusal as ChatCompletionContentPartRefusal,
     type ChatCompletionContentPartText as ChatCompletionContentPartText,
+    type ChatCompletionDeveloperMessageParam as ChatCompletionDeveloperMessageParam,
     type ChatCompletionFunctionCallOption as ChatCompletionFunctionCallOption,
     type ChatCompletionFunctionMessageParam as ChatCompletionFunctionMessageParam,
     type ChatCompletionMessage as ChatCompletionMessage,
@@ -1231,6 +1294,7 @@ export declare namespace Completions {
     type ChatCompletionModality as ChatCompletionModality,
     type ChatCompletionNamedToolChoice as ChatCompletionNamedToolChoice,
     type ChatCompletionPredictionContent as ChatCompletionPredictionContent,
+    type ChatCompletionReasoningEffort as ChatCompletionReasoningEffort,
     type ChatCompletionRole as ChatCompletionRole,
     type ChatCompletionStreamOptions as ChatCompletionStreamOptions,
     type ChatCompletionSystemMessageParam as ChatCompletionSystemMessageParam,
diff --git src/resources/chat/index.ts src/resources/chat/index.ts
index 262bf75a2..c3be19402 100644
--- src/resources/chat/index.ts
+++ src/resources/chat/index.ts
@@ -13,6 +13,7 @@ export {
   type ChatCompletionContentPartInputAudio,
   type ChatCompletionContentPartRefusal,
   type ChatCompletionContentPartText,
+  type ChatCompletionDeveloperMessageParam,
   type ChatCompletionFunctionCallOption,
   type ChatCompletionFunctionMessageParam,
   type ChatCompletionMessage,
@@ -21,6 +22,7 @@ export {
   type ChatCompletionModality,
   type ChatCompletionNamedToolChoice,
   type ChatCompletionPredictionContent,
+  type ChatCompletionReasoningEffort,
   type ChatCompletionRole,
   type ChatCompletionStreamOptions,
   type ChatCompletionSystemMessageParam,
diff --git src/resources/fine-tuning/jobs/jobs.ts src/resources/fine-tuning/jobs/jobs.ts
index 0c320e028..44dd011aa 100644
--- src/resources/fine-tuning/jobs/jobs.ts
+++ src/resources/fine-tuning/jobs/jobs.ts
@@ -127,9 +127,8 @@ export interface FineTuningJob {
   finished_at: number | null;
 
   /**
-   * The hyperparameters used for the fine-tuning job. See the
-   * [fine-tuning guide](https://platform.openai.com/docs/guides/fine-tuning) for
-   * more details.
+   * The hyperparameters used for the fine-tuning job. This value will only be
+   * returned when running `supervised` jobs.
    */
   hyperparameters: FineTuningJob.Hyperparameters;
 
@@ -195,6 +194,11 @@ export interface FineTuningJob {
    * A list of integrations to enable for this fine-tuning job.
    */
   integrations?: Array<FineTuningJobWandbIntegrationObject> | null;
+
+  /**
+   * The method used for fine-tuning.
+   */
+  method?: FineTuningJob.Method;
 }
 
 export namespace FineTuningJob {
@@ -221,18 +225,125 @@ export namespace FineTuningJob {
   }
 
   /**
-   * The hyperparameters used for the fine-tuning job. See the
-   * [fine-tuning guide](https://platform.openai.com/docs/guides/fine-tuning) for
-   * more details.
+   * The hyperparameters used for the fine-tuning job. This value will only be
+   * returned when running `supervised` jobs.
    */
   export interface Hyperparameters {
+    /**
+     * Number of examples in each batch. A larger batch size means that model
+     * parameters are updated less frequently, but with lower variance.
+     */
+    batch_size?: 'auto' | number;
+
+    /**
+     * Scaling factor for the learning rate. A smaller learning rate may be useful to
+     * avoid overfitting.
+     */
+    learning_rate_multiplier?: 'auto' | number;
+
     /**
      * The number of epochs to train the model for. An epoch refers to one full cycle
-     * through the training dataset. "auto" decides the optimal number of epochs based
-     * on the size of the dataset. If setting the number manually, we support any
-     * number between 1 and 50 epochs.
+     * through the training dataset.
+     */
+    n_epochs?: 'auto' | number;
+  }
+
+  /**
+   * The method used for fine-tuning.
+   */
+  export interface Method {
+    /**
+     * Configuration for the DPO fine-tuning method.
+     */
+    dpo?: Method.Dpo;
+
+    /**
+     * Configuration for the supervised fine-tuning method.
      */
-    n_epochs: 'auto' | number;
+    supervised?: Method.Supervised;
+
+    /**
+     * The type of method. Is either `supervised` or `dpo`.
+     */
+    type?: 'supervised' | 'dpo';
+  }
+
+  export namespace Method {
+    /**
+     * Configuration for the DPO fine-tuning method.
+     */
+    export interface Dpo {
+      /**
+       * The hyperparameters used for the fine-tuning job.
+       */
+      hyperparameters?: Dpo.Hyperparameters;
+    }
+
+    export namespace Dpo {
+      /**
+       * The hyperparameters used for the fine-tuning job.
+       */
+      export interface Hyperparameters {
+        /**
+         * Number of examples in each batch. A larger batch size means that model
+         * parameters are updated less frequently, but with lower variance.
+         */
+        batch_size?: 'auto' | number;
+
+        /**
+         * The beta value for the DPO method. A higher beta value will increase the weight
+         * of the penalty between the policy and reference model.
+         */
+        beta?: 'auto' | number;
+
+        /**
+         * Scaling factor for the learning rate. A smaller learning rate may be useful to
+         * avoid overfitting.
+         */
+        learning_rate_multiplier?: 'auto' | number;
+
+        /**
+         * The number of epochs to train the model for. An epoch refers to one full cycle
+         * through the training dataset.
+         */
+        n_epochs?: 'auto' | number;
+      }
+    }
+
+    /**
+     * Configuration for the supervised fine-tuning method.
+     */
+    export interface Supervised {
+      /**
+       * The hyperparameters used for the fine-tuning job.
+       */
+      hyperparameters?: Supervised.Hyperparameters;
+    }
+
+    export namespace Supervised {
+      /**
+       * The hyperparameters used for the fine-tuning job.
+       */
+      export interface Hyperparameters {
+        /**
+         * Number of examples in each batch. A larger batch size means that model
+         * parameters are updated less frequently, but with lower variance.
+         */
+        batch_size?: 'auto' | number;
+
+        /**
+         * Scaling factor for the learning rate. A smaller learning rate may be useful to
+         * avoid overfitting.
+         */
+        learning_rate_multiplier?: 'auto' | number;
+
+        /**
+         * The number of epochs to train the model for. An epoch refers to one full cycle
+         * through the training dataset.
+         */
+        n_epochs?: 'auto' | number;
+      }
+    }
   }
 }
 
@@ -240,15 +351,40 @@ export namespace FineTuningJob {
  * Fine-tuning job event object
  */
 export interface FineTuningJobEvent {
+  /**
+   * The object identifier.
+   */
   id: string;
 
+  /**
+   * The Unix timestamp (in seconds) for when the fine-tuning job was created.
+   */
   created_at: number;
 
+  /**
+   * The log level of the event.
+   */
   level: 'info' | 'warn' | 'error';
 
+  /**
+   * The message of the event.
+   */
   message: string;
 
+  /**
+   * The object type, which is always "fine_tuning.job.event".
+   */
   object: 'fine_tuning.job.event';
+
+  /**
+   * The data associated with the event.
+   */
+  data?: unknown;
+
+  /**
+   * The type of event.
+   */
+  type?: 'message' | 'metrics';
 }
 
 export type FineTuningJobIntegration = FineTuningJobWandbIntegrationObject;
@@ -318,8 +454,10 @@ export interface JobCreateParams {
    * your file with the purpose `fine-tune`.
    *
    * The contents of the file should differ depending on if the model uses the
-   * [chat](https://platform.openai.com/docs/api-reference/fine-tuning/chat-input) or
+   * [chat](https://platform.openai.com/docs/api-reference/fine-tuning/chat-input),
    * [completions](https://platform.openai.com/docs/api-reference/fine-tuning/completions-input)
+   * format, or if the fine-tuning method uses the
+   * [preference](https://platform.openai.com/docs/api-reference/fine-tuning/preference-input)
    * format.
    *
    * See the [fine-tuning guide](https://platform.openai.com/docs/guides/fine-tuning)
@@ -328,7 +466,8 @@ export interface JobCreateParams {
   training_file: string;
 
   /**
-   * The hyperparameters used for the fine-tuning job.
+   * The hyperparameters used for the fine-tuning job. This value is now deprecated
+   * in favor of `method`, and should be passed in under the `method` parameter.
    */
   hyperparameters?: JobCreateParams.Hyperparameters;
 
@@ -337,6 +476,11 @@ export interface JobCreateParams {
    */
   integrations?: Array<JobCreateParams.Integration> | null;
 
+  /**
+   * The method used for fine-tuning.
+   */
+  method?: JobCreateParams.Method;
+
   /**
    * The seed controls the reproducibility of the job. Passing in the same seed and
    * job parameters should produce the same results, but may differ in rare cases. If
@@ -372,7 +516,9 @@ export interface JobCreateParams {
 
 export namespace JobCreateParams {
   /**
-   * The hyperparameters used for the fine-tuning job.
+   * @deprecated: The hyperparameters used for the fine-tuning job. This value is now
+   * deprecated in favor of `method`, and should be passed in under the `method`
+   * parameter.
    */
   export interface Hyperparameters {
     /**
@@ -444,6 +590,104 @@ export namespace JobCreateParams {
       tags?: Array<string>;
     }
   }
+
+  /**
+   * The method used for fine-tuning.
+   */
+  export interface Method {
+    /**
+     * Configuration for the DPO fine-tuning method.
+     */
+    dpo?: Method.Dpo;
+
+    /**
+     * Configuration for the supervised fine-tuning method.
+     */
+    supervised?: Method.Supervised;
+
+    /**
+     * The type of method. Is either `supervised` or `dpo`.
+     */
+    type?: 'supervised' | 'dpo';
+  }
+
+  export namespace Method {
+    /**
+     * Configuration for the DPO fine-tuning method.
+     */
+    export interface Dpo {
+      /**
+       * The hyperparameters used for the fine-tuning job.
+       */
+      hyperparameters?: Dpo.Hyperparameters;
+    }
+
+    export namespace Dpo {
+      /**
+       * The hyperparameters used for the fine-tuning job.
+       */
+      export interface Hyperparameters {
+        /**
+         * Number of examples in each batch. A larger batch size means that model
+         * parameters are updated less frequently, but with lower variance.
+         */
+        batch_size?: 'auto' | number;
+
+        /**
+         * The beta value for the DPO method. A higher beta value will increase the weight
+         * of the penalty between the policy and reference model.
+         */
+        beta?: 'auto' | number;
+
+        /**
+         * Scaling factor for the learning rate. A smaller learning rate may be useful to
+         * avoid overfitting.
+         */
+        learning_rate_multiplier?: 'auto' | number;
+
+        /**
+         * The number of epochs to train the model for. An epoch refers to one full cycle
+         * through the training dataset.
+         */
+        n_epochs?: 'auto' | number;
+      }
+    }
+
+    /**
+     * Configuration for the supervised fine-tuning method.
+     */
+    export interface Supervised {
+      /**
+       * The hyperparameters used for the fine-tuning job.
+       */
+      hyperparameters?: Supervised.Hyperparameters;
+    }
+
+    export namespace Supervised {
+      /**
+       * The hyperparameters used for the fine-tuning job.
+       */
+      export interface Hyperparameters {
+        /**
+         * Number of examples in each batch. A larger batch size means that model
+         * parameters are updated less frequently, but with lower variance.
+         */
+        batch_size?: 'auto' | number;
+
+        /**
+         * Scaling factor for the learning rate. A smaller learning rate may be useful to
+         * avoid overfitting.
+         */
+        learning_rate_multiplier?: 'auto' | number;
+
+        /**
+         * The number of epochs to train the model for. An epoch refers to one full cycle
+         * through the training dataset.
+         */
+        n_epochs?: 'auto' | number;
+      }
+    }
+  }
 }
 
 export interface JobListParams extends CursorPageParams {}
diff --git src/version.ts src/version.ts
index 01cd56405..fdf4e5224 100644
--- src/version.ts
+++ src/version.ts
@@ -1 +1 @@
-export const VERSION = '4.76.3'; // x-release-please-version
+export const VERSION = '4.77.0'; // x-release-please-version
diff --git tests/api-resources/chat/completions.test.ts tests/api-resources/chat/completions.test.ts
index 5dcbf9ad6..dfc09f69b 100644
--- tests/api-resources/chat/completions.test.ts
+++ tests/api-resources/chat/completions.test.ts
@@ -11,7 +11,7 @@ const client = new OpenAI({
 describe('resource completions', () => {
   test('create: only required params', async () => {
     const responsePromise = client.chat.completions.create({
-      messages: [{ content: 'string', role: 'system' }],
+      messages: [{ content: 'string', role: 'developer' }],
       model: 'gpt-4o',
     });
     const rawResponse = await responsePromise.asResponse();
@@ -25,7 +25,7 @@ describe('resource completions', () => {
 
   test('create: required and optional params', async () => {
     const response = await client.chat.completions.create({
-      messages: [{ content: 'string', role: 'system', name: 'name' }],
+      messages: [{ content: 'string', role: 'developer', name: 'name' }],
       model: 'gpt-4o',
       audio: { format: 'wav', voice: 'alloy' },
       frequency_penalty: -2,
@@ -41,6 +41,7 @@ describe('resource completions', () => {
       parallel_tool_calls: true,
       prediction: { content: 'string', type: 'content' },
       presence_penalty: -2,
+      reasoning_effort: 'low',
       response_format: { type: 'text' },
       seed: -9007199254740991,
       service_tier: 'auto',
diff --git tests/api-resources/fine-tuning/jobs/jobs.test.ts tests/api-resources/fine-tuning/jobs/jobs.test.ts
index 0ab09768a..4de83a8b7 100644
--- tests/api-resources/fine-tuning/jobs/jobs.test.ts
+++ tests/api-resources/fine-tuning/jobs/jobs.test.ts
@@ -34,6 +34,20 @@ describe('resource jobs', () => {
           wandb: { project: 'my-wandb-project', entity: 'entity', name: 'name', tags: ['custom-tag'] },
         },
       ],
+      method: {
+        dpo: {
+          hyperparameters: {
+            batch_size: 'auto',
+            beta: 'auto',
+            learning_rate_multiplier: 'auto',
+            n_epochs: 'auto',
+          },
+        },
+        supervised: {
+          hyperparameters: { batch_size: 'auto', learning_rate_multiplier: 'auto', n_epochs: 'auto' },
+        },
+        type: 'supervised',
+      },
       seed: 42,
       suffix: 'x',
       validation_file: 'file-abc123',
diff --git tests/index.test.ts tests/index.test.ts
index f39571121..bf113e7bb 100644
--- tests/index.test.ts
+++ tests/index.test.ts
@@ -177,7 +177,7 @@ describe('instantiate client', () => {
     expect(client.apiKey).toBe('My API Key');
   });
 
-  test('with overriden environment variable arguments', () => {
+  test('with overridden environment variable arguments', () => {
     // set options via env var
     process.env['OPENAI_API_KEY'] = 'another My API Key';
     const client = new OpenAI({ apiKey: 'My API Key' });

Description

This PR updates the OpenAI Node.js library from version 4.76.3 to 4.77.0. It introduces new models, updates existing APIs, and adds support for new features, particularly in the fine-tuning and chat completion areas.

Changes

  1. package.json and related files:

    • Version bump from 4.76.3 to 4.77.0
  2. src/resources/chat/chat.ts:

    • Added new chat models: 'o1', 'o1-2024-12-17', 'gpt-4o-audio-preview-2024-12-17', 'gpt-4o-mini-audio-preview', 'gpt-4o-mini-audio-preview-2024-12-17'
    • Removed 'gpt-4o-realtime-preview' and 'gpt-4o-realtime-preview-2024-10-01'
  3. src/resources/chat/completions.ts:

    • Added ChatCompletionDeveloperMessageParam interface for developer-provided instructions
    • Added ChatCompletionReasoningEffort type for constraining reasoning effort in o1 models
    • Updated documentation for various parameters and interfaces
  4. src/resources/fine-tuning/jobs/jobs.ts:

    • Added support for new fine-tuning methods: 'supervised' and 'dpo'
    • Updated FineTuningJob and JobCreateParams interfaces to include new method-related fields
  5. Test files:

    • Updated tests to reflect new API changes and parameters
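The chat-side additions (the `developer` message role and the `reasoning_effort` parameter) can be sketched from the caller's side roughly as follows. This is an illustrative request payload, not code from the PR: the model name, prompt text, and the commented-out client call are assumptions based on the diff above, and assume openai >= 4.77.0.

```typescript
// Sketch: a chat completion request using the new "developer" role
// and the o1-only `reasoning_effort` parameter (values from the diff:
// role 'developer', reasoning_effort 'low').
const chatParams = {
  model: 'o1',
  reasoning_effort: 'low' as const,
  messages: [
    // 'developer' replaces 'system' for o1-family models
    { role: 'developer' as const, content: 'Answer in one sentence.' },
    { role: 'user' as const, content: 'What changed in openai 4.77.0?' },
  ],
};

// With a configured client this would be passed straight through:
// const client = new OpenAI();
// const completion = await client.chat.completions.create(chatParams);
console.log(JSON.stringify(chatParams.messages[0]));
```

The updated test file in the diff makes the same substitution, swapping `role: 'system'` for `role: 'developer'` and adding `reasoning_effort: 'low'`.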
sequenceDiagram
    participant Client
    participant OpenAI
    participant Chat
    participant FineTuning

    Client->>OpenAI: Initialize with new version
    Client->>Chat: Create chat completion
    Chat-->>Client: Return chat completion
    Client->>FineTuning: Create fine-tuning job
    FineTuning-->>Client: Return fine-tuning job
    Client->>Chat: Use new o1 model
    Chat-->>Client: Return o1 model response

Possible Issues

  • The removal of 'gpt-4o-realtime-preview' models might break existing implementations that rely on these specific models.
  • The introduction of new fine-tuning methods and parameters may require updates to existing fine-tuning workflows.
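For the second point, the new `method` block on job creation looks roughly like the sketch below. The model name and file ID are placeholders, and the client call is commented out; the hyperparameter shape follows the `Method.Dpo.Hyperparameters` interface added in the diff.

```typescript
// Sketch: creating a fine-tuning job with the new `method` parameter
// (openai >= 4.77.0). Each hyperparameter accepts 'auto' or a number.
const jobParams = {
  model: 'gpt-4o-mini-2024-07-18', // placeholder model
  training_file: 'file-abc123',    // placeholder file ID
  method: {
    type: 'dpo' as const,
    dpo: {
      hyperparameters: {
        batch_size: 'auto' as const,
        beta: 'auto' as const, // DPO-specific: weight of the policy/reference penalty
        learning_rate_multiplier: 'auto' as const,
        n_epochs: 'auto' as const,
      },
    },
  },
};

// const client = new OpenAI();
// const job = await client.fineTuning.jobs.create(jobParams);
```

Existing callers that passed the top-level `hyperparameters` field still work, but that field is now deprecated in favor of nesting hyperparameters under `method`.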

Security Hotspots

No significant security issues were identified in this update.

The sequence diagram above illustrates the basic flow of using the updated library: creating chat completions, creating fine-tuning jobs, and calling the new o1 model.

@thypon thypon merged commit 327ed63 into main Jan 6, 2025
8 checks passed
@thypon thypon deleted the renovate/openai-4.x-lockfile branch January 6, 2025 21:02