I have a few Jekyll websites for smaller open source projects, and I wanted to write a local Node.js script that leverages Azure OpenAI to provide editorial updates to the content of my web pages (as Markdown files).
What I want is to define prompts for the various content elements of a page that describe what I want done to them (example pseudo-prompt: "Here is the page title; optimize it for SEO and limit it to 60 characters: 'Learn how to get started with development on X'. Only return the updated title and nothing else.").
I've done this and sent the prompts off to the Azure OpenAI APIs, using the gpt-35-turbo model and the Azure Completions API. I get responses that are close, but they have random "junk" content in them as well (even though I explicitly instruct the model not to include anything extra in the prompt).
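For context, here is a minimal sketch of how I'm making the call. The endpoint path and `api-version` follow the Azure OpenAI Completions REST shape; the environment variable names, deployment name, and the `max_tokens`/`temperature` values are just placeholders for my setup:

```javascript
// Build the request body for the legacy Completions API.
// max_tokens / temperature are values I'm experimenting with, not recommendations.
function buildCompletionRequest(prompt) {
  return {
    prompt: prompt,
    max_tokens: 60,
    temperature: 0.2,
  };
}

// Send one content-element prompt to my Azure OpenAI deployment.
// Assumes Node 18+ (global fetch) and these env vars:
//   AZURE_OPENAI_ENDPOINT   e.g. https://my-resource.openai.azure.com
//   AZURE_OPENAI_DEPLOYMENT my gpt-35-turbo deployment name
//   AZURE_OPENAI_KEY        the API key
async function optimizeTitle(title) {
  const prompt =
    `Here is the page title, optimize for SEO, and limit to 60 characters: ` +
    `"${title}". Only return the updated title and nothing else.`;

  const url =
    `${process.env.AZURE_OPENAI_ENDPOINT}/openai/deployments/` +
    `${process.env.AZURE_OPENAI_DEPLOYMENT}/completions?api-version=2023-05-15`;

  const res = await fetch(url, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "api-key": process.env.AZURE_OPENAI_KEY,
    },
    body: JSON.stringify(buildCompletionRequest(prompt)),
  });

  const data = await res.json();
  // The junk shows up here, inside choices[0].text.
  return data.choices[0].text.trim();
}
```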
For example, I might get back something like:
"choices": [
{
"text": "Example response (correct):Get started developing your solution using XnnnnFor the given",
"index": 0,
"finish_reason": "length",
"logprobs": null
}
As you can see, it puts "junk" like "Example response (correct):" in front, and sometimes appends weird characters or fragments of "further responses" (like "For the given") at the end.
I assume I'm overlooking something simple. Am I using the wrong model? Is GPT too conversational for a simple prompt → direct output use case? Is Completions the right API?