Replies: 2 comments
- Looking for an update on this as well. I know that passing in gpt-5 models will use the Responses API, but I don't want to use those models due to reasoning time.
- This discussion was automatically locked because it has not been updated in over 30 days. If you still have questions about this topic, please ask us at community.vercel.com/ai-sdk
-
Hi everyone!
I'm using an OpenAI Compatible Provider to communicate via Vercel AI SDK with a self-hosted gpt-oss-120b.
It seems that the SDK sends all requests via the /completions API rather than the newer /responses API, which OpenAI recommends for all of its newer models (see also here), including gpt-oss.

Is there some way to tell the OpenAI Compatible Provider to use the newer API? I couldn't find any discussions or issues around this topic, but I also find it hard to imagine I'm the only one wondering about this.
Thank you for any pointers!
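For context, the setup described above can be sketched roughly as follows. This is an untested illustration, not a fix: the package name `@ai-sdk/openai-compatible`, the `createOpenAICompatible` options, and the baseURL are assumptions about the asker's environment, and the comments reflect the observed behavior (requests going to the chat completions endpoint) rather than documented configuration.

```typescript
// Sketch of the setup in question (assumed package: @ai-sdk/openai-compatible).
import { createOpenAICompatible } from '@ai-sdk/openai-compatible';
import { generateText } from 'ai';

// Provider pointed at a self-hosted gpt-oss-120b server; URL is a placeholder.
const provider = createOpenAICompatible({
  name: 'self-hosted',
  baseURL: 'http://localhost:8000/v1',
});

// As described above, these requests appear to go to the
// /chat/completions endpoint; there is no obvious option here
// to target the newer /responses endpoint instead.
const { text } = await generateText({
  model: provider('gpt-oss-120b'),
  prompt: 'Hello!',
});
```

Note that the first-party `@ai-sdk/openai` provider does expose Responses-based models, but the question is specifically about the OpenAI *Compatible* provider used with a self-hosted server.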