Microsoft Explains Azure OpenAI Servicing and Privacy
By Kurt Mackie
June 30, 2023
In a Friday announcement, Microsoft explained how large language model servicing works in its Azure OpenAI service, adding assurances about privacy, too.
Microsoft uses large language models created by the generative artificial intelligence (AI) company OpenAI in its Azure OpenAI service. These models get updated and have version numbers. Newly added to the Azure OpenAI service are 0613-version updates of the GPT 3.5 and GPT 4 models, Microsoft announced.
Older large language model versions, such as GPT 3.5 Turbo version 0301 and GPT 4 version 0314, are set to "expire no earlier than October 15th, 2023," per Microsoft's "Azure OpenAI Service Models" document.
GPT stands for "Generative Pre-trained Transformer." Both GPT 3.5 and GPT 4 can "understand" user text prompts and generate "natural language" responses to those prompts. GPT 4 can also generate code, although the Azure OpenAI service additionally supports Codex models that are specifically designed for generating code from text prompts.
More details about the many available models in the Azure OpenAI service can be found in Microsoft's "Azure OpenAI Service Models" document.
Some price reductions were also announced, relative to Microsoft's somewhat complex "pay-as-you-go" pricing model for Azure OpenAI.
New GPT Models
Microsoft specifically added new GPT 3.5 Turbo and GPT 3.5 Turbo-16k models to its Azure OpenAI service, plus new GPT 4 and GPT 4-32k models, all of which are at version 0613.
GPT 4 is the most accurate model so far, but it isn't generally available to the public. GPT 4 use in the Azure OpenAI service is available only by request, with prospective users needing to fill out this form.
Automatic updates to these new 0613 versions of GPT 3.5 and  GPT 4 will arrive "in two weeks," Microsoft indicated.
Because new versions of GPT 3.5 and GPT 4 are arriving, the older versions will expire and an automatic upgrade will be triggered. Organizations that don't want such automatic upgrades to occur will have to "set the model upgrade option to expire through the API," although Microsoft won't publish guidance on how to do that until "September 1."
Controlling Model Updates
Organizations using the Azure OpenAI service have some control over the model updates.
It's possible to "pin" a particular model version, or organizations can use Microsoft's default setting, which applies updates automatically. These settings can be managed using Azure AI Studio, which also has a function that lists the deprecation dates for particular model versions.
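In deployment terms, pinning comes down to specifying an explicit model version on the Azure OpenAI deployment resource. Here's a rough sketch of what that looks like as an ARM template fragment; the account and deployment names, apiVersion, and SKU values are illustrative assumptions, not Microsoft's forthcoming guidance:

```json
{
  "type": "Microsoft.CognitiveServices/accounts/deployments",
  "apiVersion": "2023-05-01",
  "name": "my-openai-account/gpt-35-turbo-pinned",
  "properties": {
    "model": {
      "format": "OpenAI",
      "name": "gpt-35-turbo",
      "version": "0301"
    }
  },
  "sku": {
    "name": "Standard",
    "capacity": 1
  }
}
```

Pinning an explicit version like this keeps the deployment on that version until the version itself is retired.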
The auto-update capability is available only for "select model deployments," Microsoft's "Service Models" document explained. While Microsoft generally recommended using the default setting, it advised organizations to choose when upgrades occur when using "embeddings."
An "embedding" is a special data format used for certain functionalities; Microsoft offers "similarity, text search and code search" embeddings. This data format is said to be "easily utilized by machine learning models and algorithms," per the "Service Models" document.
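As an illustration of how embedding vectors get "utilized by machine learning models and algorithms," the sketch below skips the embeddings model itself and uses tiny hand-made three-dimensional vectors (real embeddings have hundreds or thousands of dimensions); similarity search then reduces to comparing vectors, typically via cosine similarity:

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: dot product divided by the product of magnitudes.
    # Values near 1.0 mean the vectors point in nearly the same direction.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy embeddings: in practice these vectors come from an embeddings model.
embeddings = {
    "How do I reset my password?": [0.9, 0.1, 0.2],
    "Steps to change a forgotten password": [0.85, 0.15, 0.25],
    "Quarterly sales figures for 2023": [0.1, 0.9, 0.3],
}

# Toy embedding of a hypothetical query such as "password reset help".
query = [0.88, 0.12, 0.22]

# Rank the stored texts by similarity to the query vector.
ranked = sorted(
    embeddings,
    key=lambda text: cosine_similarity(query, embeddings[text]),
    reverse=True,
)
print(ranked[0])  # the password-related texts outrank the sales one
```

The same vector comparison underpins similarity and search scenarios: texts about related topics end up with vectors that score close to 1.0 against each other.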
Azure OpenAI Use and Privacy
Microsoft repeated privacy assurances regarding the use of Azure OpenAI services in its Friday announcement. It doesn't use customer-supplied data to improve Microsoft products or fine-tune its models, and it doesn't disclose customer data to "third parties."
However, the prompts submitted by Azure OpenAI users are still subject to Microsoft's "abuse monitoring" process, which includes human reviews by Microsoft employees when a prompt or AI-generated completion gets flagged.
Microsoft additionally stores such customer-generated information for 30 days to detect and address abuses. Organizations that don't want such oversight from Microsoft have to fill out a form to request an exemption from Microsoft's abuse monitoring process.
Here's how Microsoft explained the storing of customer-supplied information for abuse monitoring, per its "Data, Privacy and Security for Azure OpenAI Service" document:
  To detect and mitigate abuse, Azure OpenAI stores all prompts and generated content securely for up to thirty (30) days. (No prompts or completions are stored if the customer is approved for and elects to configure abuse monitoring off, as described below.)
The document further explained that the models themselves  are stateless, meaning that "no prompts or generations are stored in the  model."
Microsoft's assurances on Azure OpenAI and privacy seem plainly stated, although other messaging, such as this announcement that organizations are going to need zero trust security to use AI, raises some doubts. Microsoft also recently floated the idea that data governance tools may be needed to use emerging AI tools such as Microsoft 365 Copilot, which is now at the limited private preview stage.
About the Author
Kurt Mackie is senior news producer for 1105 Media's Converge360 group.