Xamarin.Forms PCL - Can't get Computer Vision API to work in iOS app

I'm running the Xamarin SDKs from this release: https://releases.xamarin.com/stable-release-cycle-9/. My app is a PCL (profile 259) on Xamarin.Forms 2.3.4.231. All of the HttpClient code lives in a separate PCL (profile 7) SDK project; that SDK handles all of my HttpClient calls and everything else has been working just fine.

I can't get the Project Oxford package to work in my iOS, Android, or UWP apps, so I decided to write my own HttpClient code. By the way, the Project Oxford package never returns from either of the following lines of code:

text = await client.RecognizeTextAsync(imageURL);
text = await client.RecognizeTextAsync(imageURL, languageCode: "en", detectOrientation: true);
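
A call that never returns is often a symptom of blocking on the task somewhere up the call chain (for example with .Result or .Wait() on the UI thread) rather than of the request itself hanging. Purely as a sketch, and assuming the VisionServiceClient and OcrResults types from the Project Oxford package, the call awaited end-to-end would look like this:

using Microsoft.ProjectOxford.Vision;
using Microsoft.ProjectOxford.Vision.Contract;
using System.Threading.Tasks;

public async Task<OcrResults> RecognizeReceiptTextAsync(string imageURL)
{
    // Key-only constructor as in the package samples; a regional-endpoint
    // overload may be needed for eastus2 (assumption, not verified here).
    var client = new VisionServiceClient("KEY 1");

    // ConfigureAwait(false) keeps the continuation off the UI context, which
    // sidesteps the classic sync-over-async deadlock if a caller blocks on the task.
    return await client.RecognizeTextAsync(imageURL, languageCode: "en", detectOrientation: true)
                       .ConfigureAwait(false);
}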

For now, I am uploading the image to Blob storage and then passing the imageUrl to the Computer Vision API with PostAsync. This code works in my Android and UWP apps, but it fails in my iOS app with a Bad Request error.

Here is the code that works in my Android and UWP apps.

using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

// Return type assumed to be the OcrResults model produced by Mapper.MapFromJson.
public static async Task<OcrResults> ProcessReceiptAsync(string imageUrl)
{
    // Instantiate an HTTP client
    var client = new HttpClient();
    var apiKey = "KEY 1";

    // Request parameters and URI
    string requestParameters = "language=en&detectOrientation=true";
    string uri = "https://eastus2.api.cognitive.microsoft.com/vision/v1.0/ocr?" + requestParameters;

    // Pass the subscription key through the HTTP request header
    client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", apiKey);

    // Format the request body
    byte[] byteData = Encoding.UTF8.GetBytes($"{{\"url\": \"{imageUrl}\"}}");

    using (var content = new ByteArrayContent(byteData))
    {
        // Specify the request body Content-Type
        content.Headers.ContentType = new MediaTypeHeaderValue("application/json");

        // Send the POST request
        HttpResponseMessage response = await client.PostAsync(uri, content);

        // Read the response body into the model
        //return await response.Content.ReadAsStringAsync(OcrResults);
        var result = Mapper.MapFromJson(await response.Content.ReadAsStringAsync());
        return result;
    }
}
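
When the iOS call comes back as Bad Request, the response body from the Cognitive Services endpoint normally says what it objected to, so it may be worth logging it before handing the JSON to Mapper.MapFromJson. A small, hypothetical helper (ReadResponseOrLogAsync is not part of any library; same usings as ProcessReceiptAsync above):

// Sketch: surface the service's error payload when a call fails, instead of
// mapping the body blindly.
private static async Task<string> ReadResponseOrLogAsync(HttpResponseMessage response)
{
    string json = await response.Content.ReadAsStringAsync();
    if (!response.IsSuccessStatusCode)
    {
        // On iOS this should show exactly what the 400 Bad Request complains about.
        System.Diagnostics.Debug.WriteLine($"OCR call failed: {(int)response.StatusCode} - {json}");
    }
    return json;
}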

I wasn't able to get the ExtractTextFromImageUrlAsync method from the service below working in my iOS app (same Bad Request error), and the ExtractTextFromImageStreamAsync method from the same file returned a "file too large" error.
https://github.com/HoussemDellai/Microsoft-Cognitive-Services-API/blob/master/ComputerVisionApplication/ComputerVisionApplication/Services/ComputerVisionService.cs
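
On the "file too large" side, it might be worth logging the payload size before calling the stream overload, just to confirm that it really is the size of the blob image the service is rejecting. A trivial sketch:

// Sketch: log the image size before calling ExtractTextFromImageStreamAsync,
// to confirm whether the "file too large" error is simply the payload size.
private static void LogImageSize(System.IO.Stream imageStream)
{
    if (imageStream.CanSeek)
    {
        System.Diagnostics.Debug.WriteLine($"Image payload: {imageStream.Length} bytes");
    }
}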

At this stage, I believe it has something to do with how the iOS app encodes the imageUrl for transport over HTTP. I've tried the following request-body variations, converting them to byte[] as well as wrapping them in StringContent:

string body = "{\"url\":\"" + imageUrl + "\"}";
string body2 = $"{{\"url\": \"{imageUrl}\"}}";
string body3 = @"{""url"": """ + imageUrl + @"""}";
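
Since the only part of the body that changes per platform is the hand-built JSON around imageUrl, one way to rule that in or out is to let a serializer build the body and send it as StringContent with an explicit charset. This sketch assumes Newtonsoft.Json is available in the PCL (it isn't referenced in the post, so treat that as an assumption):

// Sketch: build the request body with a serializer instead of string
// concatenation, so the quoting/escaping of the blob URL cannot differ by platform.
private static HttpContent BuildUrlBody(string imageUrl)
{
    string body = Newtonsoft.Json.JsonConvert.SerializeObject(new { url = imageUrl });
    return new StringContent(body, Encoding.UTF8, "application/json");
}

The resulting HttpContent can be passed straight to client.PostAsync(uri, content) in place of the ByteArrayContent used above.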

Any help is much appreciated. Thanks!
