ChatGPT, OpenAI’s text-generating AI chatbot, has taken the world by storm. What started as a tool to hyper-charge productivity through writing essays and code with short text prompts has evolved into a behemoth used by more than 92% of Fortune 500 companies for more wide-ranging needs. And that growth has propelled OpenAI itself into becoming one of the most-hyped companies in recent memory, even if CEO and co-founder Sam Altman’s firing and swift return raised concerns about its direction and opened the door for competitors.

Function calling is a new way to more reliably connect GPT’s capabilities with external tools and APIs. These models have been fine-tuned to both detect when a function needs to be called (depending on the user’s input) and to respond with JSON that adheres to the function signature. Function calling allows developers to more reliably get structured data back from the model. For example, developers can:

- Create chatbots that answer questions by calling external tools (e.g., like ChatGPT Plugins). Convert queries such as “Email Anya to see if she wants to get coffee next Friday” to a function call like send_email(to: string, body: string), or “What’s the weather like in Boston?” to get_current_weather(location: string, unit: 'celsius' | 'fahrenheit').
- Convert natural language into API calls or database queries. Convert “Who are my top ten customers this month?” to an internal API call such as get_customers_by_revenue(start_date: string, end_date: string, limit: int), or “How many orders did Acme, Inc. place last month?” to a SQL query using sql_query(query: string).
- Extract structured data from text. Define a function called extract_people_data(people: ), to extract all people mentioned in a Wikipedia article.

These use cases are enabled by new API parameters in our /v1/chat/completions endpoint, functions and function_call, that allow developers to describe functions to the model via JSON Schema, and optionally ask it to call a specific function.
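To make the mechanics concrete, here is a minimal sketch of the shapes involved: a function described via JSON Schema (as it would be passed in the `functions` parameter of /v1/chat/completions) and an assistant message of the kind the model returns when it chooses to call that function. The message contents here are illustrative stand-ins, not real API output.

```python
import json

# A function definition in the shape accepted by the `functions`
# parameter: name, description, and JSON Schema parameters.
functions = [
    {
        "name": "get_current_weather",
        "description": "Get the current weather in a given location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "The city and state, e.g. Boston, MA",
                },
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["location"],
        },
    }
]

# When the model decides a call is needed, the assistant message carries
# no text content; instead, `function_call` holds the chosen function
# name and a JSON-encoded argument string (example values below).
message = {
    "role": "assistant",
    "content": None,
    "function_call": {
        "name": "get_current_weather",
        "arguments": '{"location": "Boston, MA", "unit": "celsius"}',
    },
}

# The arguments arrive as a string, so the caller parses them before
# dispatching to the real implementation of the function.
args = json.loads(message["function_call"]["arguments"])
print(args["location"])  # Boston, MA
```

Note that the model only produces the JSON arguments; actually executing `get_current_weather` and returning its result to the model remains the developer’s responsibility.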
Developers can now describe functions to gpt-4-0613 and gpt-3.5-turbo-0613, and have the model intelligently choose to output a JSON object containing arguments to call those functions.

We previously communicated to developers that gpt-3.5-turbo-0301, gpt-4-0314 and gpt- models were scheduled for sunset on Sept 13, 2023. After reviewing feedback from customers and our community, we are extending support for those models until at least June 13, 2024.

When we release new model versions, our top priority is to make newer models smarter across the board. We are targeting improvements on a large number of axes, such as instruction following, factual accuracy, and refusal behavior. For instance, the gpt-4-0613 model introduced last month resulted in significant improvement on calling functions. We look at a large number of evaluation metrics to determine if a new model should be released. While the majority of metrics have improved, there may be some tasks where the performance gets worse.

This is why we allow API users to pin the model version. For example, you can use gpt-4-0314 instead of the generic gpt-4, which points to the latest model version. Each individually pinned model is stable, meaning that we won’t make changes that impact the outputs. We are working hard to ensure that new versions result in improvements across a comprehensive range of tasks. That said, our evaluation methodology isn’t perfect, and we’re constantly improving it. One way to help us ensure new models get better at domains you care about is to contribute to the OpenAI Evals library to report shortcomings in our models. We understand that model upgrades and behavior changes can be disruptive to your applications. We are working on ways to give developers more stability and visibility into how we release and deprecate models.
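The pinning distinction above can be sketched in a request payload: the generic alias re-points to the latest release, while a dated snapshot name stays fixed. The request dict below is illustrative, not a live API call.

```python
# Generic alias vs. pinned snapshot (sketch):
generic = "gpt-4"       # alias; re-points whenever a new version ships
pinned = "gpt-4-0314"   # dated snapshot; outputs remain stable

# A chat completions request pinned to the snapshot, so a model
# upgrade cannot silently change behavior for this application.
request = {
    "model": pinned,
    "messages": [{"role": "user", "content": "Hello"}],
}
print(request["model"])  # gpt-4-0314
```

Applications that want the newest behavior automatically would pass the generic alias instead and accept that outputs may shift between releases.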