App - Server calls: what is the best way to secure them?

NorbertK PLMember

Hi

Let's talk about mobile app client calls to a service web application. What is the best approach to make sure that:

  • The request that the web application is handling indeed comes from my app instance, not from someone trying to deceive my system
  • I can identify the id of the client app that is the author of the request (btw I am using the phone number as the user identifier, like WhatsApp)
  • I am safe from other dangers that I haven't thought about

My current approach is to:

  1. Give every user a unique id that is stored in Shared Preferences on Android and the Keychain on iOS
  2. Hardcode a key in both the client app and the web application
  3. Use the RijndaelManaged cryptography class to encrypt every request/response using the hardcoded key (a rough sketch is below)
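
Roughly, step 3 looks like the sketch below (the key and IV here are placeholders, not my real values):

    // Sketch of step 3: encrypt a request body with the key that is hardcoded
    // in both the app and the web application. Key/IV below are placeholders.
    using System.IO;
    using System.Security.Cryptography;
    using System.Text;

    static class RequestCrypto
    {
        // Hardcoded 256-bit key and 128-bit IV (placeholder values only).
        static readonly byte[] Key = Encoding.UTF8.GetBytes("0123456789ABCDEF0123456789ABCDEF");
        static readonly byte[] IV  = Encoding.UTF8.GetBytes("0123456789ABCDEF");

        public static byte[] Encrypt(string plainText)
        {
            using (var rijndael = new RijndaelManaged { Key = Key, IV = IV })
            using (var encryptor = rijndael.CreateEncryptor())
            using (var ms = new MemoryStream())
            {
                using (var cs = new CryptoStream(ms, encryptor, CryptoStreamMode.Write))
                using (var writer = new StreamWriter(cs))
                {
                    writer.Write(plainText);
                }
                return ms.ToArray();
            }
        }
    }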

Call me paranoid, but I want to make sure that, for example, it is impossible for anyone to get my key if it is hardcoded in the app. Also, this approach doesn't cover the scenario where the key is somehow leaked and I have to change it. I'd love to get a second opinion from someone more experienced about this.

Posts

  • DavidDancy AUMember ✭✭✭✭

    It's not possible to completely protect anything stored in the app, especially on Android. This means that there is nothing you can do to completely prevent data that is hard-coded in your app from being discovered.

    All you can do is make it more expensive (in time as well as money) than it's worth for any potential unauthorised entity.

    Having things in the KeyChain/KeyStore is a good idea, but this does not protect you from Jailbreaks where an unauthorised user can simply read values in memory (i.e. before they get encrypted or after they are decrypted).
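
    To illustrate, one way to put a small secret into the Keychain/KeyStore from shared code is via Xamarin.Essentials SecureStorage (the key name here is just an example):

        // Illustrative only: persist a small per-user secret via the platform
        // keystore. Xamarin.Essentials.SecureStorage uses the iOS Keychain and
        // the Android KeyStore under the hood; the value is still readable once
        // the app decrypts it into memory on a compromised device.
        using System.Threading.Tasks;
        using Xamarin.Essentials;

        static class UserSecretStore
        {
            const string UserIdKey = "user_id";   // example key name

            public static Task SaveUserIdAsync(string userId) =>
                SecureStorage.SetAsync(UserIdKey, userId);

            public static Task<string> LoadUserIdAsync() =>
                SecureStorage.GetAsync(UserIdKey);    // null if nothing stored
        }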

    IMO using a phone number as a user id is not a good idea. If your user moves to a new town, or changes mobile phone provider, the phone number is then useless. Much better is to use something like an email address or some other genuinely unique number / string. You then also need a separate email address associated with the user in your back-end database so you can send password reset requests to it.

    One aspect of using encryption that you might need to be aware of is that if you use anything not provided by the OS you may need to jump through the (many) hoops of US export regulations regarding the encryption technology you have implemented. All apps in the various stores (regardless of their sales country) are hosted in the USA and so the encryption technology export rules apply to apps sold in any country. This includes free apps as well.

    For this reason we took the view that it's way simpler (and not demonstrably less secure) to store important things in the KeyChain/KeyStore (where they are encrypted by the OS), unimportant things in NSUserDefaults/PreferenceManager, and rely on HTTPS for transport security. For larger-but-important things we use encrypted SQLite, but we do not provide the encryption ourselves. However we recognise that both KeyStore and encrypted SQLite require a passcode that will have to be embedded in the app, so they are both inherently insecure.

    We pin the server's certificate(s) to our app for an extra level of comfort, but recognise that even this can be hacked if the device is jailbroken. We therefore also try to detect jailbreaks and refuse to run on a jailbroken device. However the point of a good jailbreak is that it's not detectable by an app so this is also not reliable.
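
    For illustration, pinning against a certificate thumbprint baked into the app can look roughly like this (the thumbprint is a placeholder, and as noted it is itself just another value embedded in the app):

        // Illustrative certificate pinning: reject any TLS connection whose
        // server certificate does not match a thumbprint shipped with the app.
        // The thumbprint below is a placeholder, not a real value.
        using System.Net;
        using System.Net.Security;
        using System.Security.Cryptography.X509Certificates;

        static class CertificatePinning
        {
            const string PinnedThumbprint = "0000000000000000000000000000000000000000"; // placeholder

            public static void Enable()
            {
                ServicePointManager.ServerCertificateValidationCallback = Validate;
            }

            static bool Validate(object sender, X509Certificate certificate,
                                 X509Chain chain, SslPolicyErrors sslPolicyErrors)
            {
                if (sslPolicyErrors != SslPolicyErrors.None || certificate == null)
                    return false;

                var thumbprint = new X509Certificate2(certificate).Thumbprint;
                return string.Equals(thumbprint, PinnedThumbprint,
                                     System.StringComparison.OrdinalIgnoreCase);
            }
        }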

    As a final level of defence in the app we refuse to run if we can detect that the device is an emulator, or if the app is in Debug mode, or if the app's signature has changed (i.e. it's been signed with a different certificate).
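
    A rough sketch of that kind of check on Xamarin.Android (the expected signature hash is a placeholder, and all of this can of course be patched out on a compromised device):

        // Illustrative app-integrity checks (Xamarin.Android). Placeholder
        // values only; a determined attacker can bypass every one of these.
        using Android.App;
        using Android.Content.PM;

        static class IntegrityChecks
        {
            const string ExpectedSignatureHash = "PLACEHOLDER_BASE64_HASH";

            public static bool LooksTampered()
            {
        #if DEBUG
                return true;                    // refuse to run debug builds in production
        #else
                var context = Application.Context;
                var info = context.PackageManager.GetPackageInfo(
                    context.PackageName, PackageInfoFlags.Signatures);

                foreach (var signature in info.Signatures)
                {
                    var hash = System.Convert.ToBase64String(
                        System.Security.Cryptography.SHA256.Create()
                            .ComputeHash(signature.ToByteArray()));
                    if (hash != ExpectedSignatureHash)
                        return true;            // signed with an unexpected certificate
                }
                return false;
        #endif
            }
        }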

    These are all measures that can be circumvented by a determined hacker, but all of them together put the amount of expertise needed to gain unauthorised access up at a level where we deem ourselves "secure enough". We're only really defending ourselves against "drive-by" style of hacking here. Anyone really determined is going to have tools that bypass all these app-based defences.

    This is the same kind of security that you apply to your house. We put a simple Yale-type lock on our front door because it's "secure enough". It stops casual burglaries, but we all know that anyone determined enough is either going to pick the lock or just bash the door down and there's nothing we can do to prevent it. Putting passcodes and certificate ids into the app is just like leaving the key to the front door of your house under the flowerpot at the top of the driveway, and has exactly the same level of security.

    In the end the only thing we can rely on to know the user's identity is that the user correctly enters their credentials, whether via password or PIN number. Any other information embedded in the app can be easily compromised, so we don't trust it.

    On the API side we try to make sure that each exposed API has a very specific purpose and narrow scope. So we don't have APIs that return (e.g.) all user ids, all account codes, etc. Instead we require the app to make very specific API calls that only retrieve or change the logged-in user's data. We also don't give the APIs access to any user ids that would let them touch multiple users' data (think admin-level access). By this we hope to secure the data in the back end against attacks that compromise lots of data at once.
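
    As an illustration, an endpoint scoped that way might look like the following (ASP.NET Web API style; the repository and route names are made up):

        // Illustrative Web API controller: no user id is accepted from the
        // client; the lookup is scoped to whoever is authenticated.
        using System.Web.Http;

        [Authorize]
        public class AccountsController : ApiController
        {
            readonly IAccountRepository _accounts;    // made-up repository abstraction

            public AccountsController(IAccountRepository accounts)
            {
                _accounts = accounts;
            }

            // GET api/accounts/mine - returns only the caller's own account.
            [HttpGet, Route("api/accounts/mine")]
            public IHttpActionResult GetMyAccount()
            {
                var userName = User.Identity.Name;    // set by the auth pipeline
                var account = _accounts.FindByUserName(userName);
                return account == null ? (IHttpActionResult)NotFound() : Ok(account);
            }
        }

        public interface IAccountRepository
        {
            object FindByUserName(string userName);
        }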

    Other useful measures include things like rate-limiting API calls (stops a bot from harvesting lots of data quickly) and implementing OAuth for authentication (like having 2 front doors on your house).
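
    A very rough sketch of per-user rate limiting as a Web API action filter (in-memory and single-server only; the limit is arbitrary):

        // Illustrative per-user rate limiting for ASP.NET Web API: a fixed
        // one-minute window kept in memory. A real deployment behind a load
        // balancer would need a shared store instead.
        using System;
        using System.Collections.Concurrent;
        using System.Net;
        using System.Net.Http;
        using System.Web.Http.Controllers;
        using System.Web.Http.Filters;

        public class RateLimitAttribute : ActionFilterAttribute
        {
            const int RequestsPerMinute = 60;    // arbitrary illustrative limit

            static readonly ConcurrentDictionary<string, (DateTime WindowStart, int Count)> Counters =
                new ConcurrentDictionary<string, (DateTime WindowStart, int Count)>();

            public override void OnActionExecuting(HttpActionContext actionContext)
            {
                var user = actionContext.RequestContext.Principal?.Identity?.Name ?? "anonymous";
                var now = DateTime.UtcNow;

                var entry = Counters.AddOrUpdate(user,
                    _ => (now, 1),
                    (_, current) => now - current.WindowStart > TimeSpan.FromMinutes(1)
                        ? (now, 1)
                        : (current.WindowStart, current.Count + 1));

                if (entry.Count > RequestsPerMinute)
                    actionContext.Response = actionContext.Request.CreateResponse(
                        (HttpStatusCode)429, "Rate limit exceeded");
            }
        }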

    Our APIs are written with .NET, so we can also do things like adding security attributes to the methods that service the API calls. Another common problem is making sure that the APIs correctly return only JSON and HTTP error codes, not HTML - which can happen if the API is built into an existing web server instead of being a dedicated API server in its own right, because the error handling in the web server might try to serve up the site's standard error page instead of the API response.
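
    One way to guarantee JSON-only errors is a global exception filter, roughly like this (the shape of the error payload is just an example):

        // Illustrative global exception filter: unhandled errors come back as a
        // small JSON payload with a proper status code, never as an HTML page.
        using System.Net;
        using System.Net.Http;
        using System.Web.Http.Filters;

        public class JsonErrorFilterAttribute : ExceptionFilterAttribute
        {
            public override void OnException(HttpActionExecutedContext context)
            {
                context.Response = context.Request.CreateResponse(
                    HttpStatusCode.InternalServerError,
                    new { error = "An unexpected error occurred." });  // no stack trace leaked
            }
        }

        // Registered once at startup, e.g. in WebApiConfig.Register:
        //   config.Filters.Add(new JsonErrorFilterAttribute());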

    That's all I can think of for now. I'm sure others more knowledgeable in API-building could add more.

  • Hunuman GBMember ✭✭✭✭

    @DavidDancy said:
    One aspect of using encryption that you might need to be aware of is that if you use anything not provided by the OS you may need to jump through the (many) hoops of US export regulations regarding the encryption technology you have implemented. All apps in the various stores (regardless of their sales country) are hosted in the USA and so the encryption technology export rules apply to apps sold in any country. This includes free apps as well.

    Hi @DavidDancy

    Excellent stuff, thank you.

    I was particularly interested by your insight quoted above.

    Would I be correct in assuming that System.Security.Cryptography has no problem in this area?
    Also wondering about popular encryption NuGet packages like Bouncy Castle, as used in the Portable.Licensing component...

    Tim

  • DavidDancy AUMember ✭✭✭✭

    @Hunuman I think you should get specific advice for your exact situation. We had to consult our company's (technical) legal counsel and get an actual legal opinion. A non-lawyerly opinion might venture that if it's provided by some recognised part of the system, or by a well-known and extensively-used library, you will probably have no issues. But if the consequences matter to you, you should probably get a professional opinion from someone who's prepared to help you out if you're challenged on it.

    Sorry I can't be more specific, but this is really more of a legal issue than a technical one. Everyone except the people who make the restriction laws realises that the encryption technology they are trying to protect is freely available to the whole world regardless of their restrictions. These are legal hoops we have to jump through in order to tick some boxes on a form to satisfy a process. Whether you need to do that varies enormously with the exact technology you have used for your encryption.

    Our lawyer felt that anything provided by the operating system is exempt from the export restrictions since it's built in to the device. Other libraries used by an application need to be evaluated on their own merits, preferably by someone with experience who knows what they're doing. :smile:

  • Hunuman GBMember ✭✭✭✭

    @DavidDancy

    Thanks for the "heads up" David, I appreciate it.
    Luckily our implementation uses the platform's encryption services, so hopefully it's not an issue, but I have passed it on to our legal team just in case.

    Cheers,

    Tim

  • GuyProvost CAMember ✭✭✭

    @NorbertK said:
    Hi

    Let's talk about mobile app client calls to a service web application. What is the best approach to make sure that:

    • The request that the web application is handling indeed comes from my app instance, not from someone trying to deceive my system
    • I can identify the id of the client app that is the author of the request (btw I am using the phone number as the user identifier, like WhatsApp)
    • I am safe from other dangers that I haven't thought about

    My current approach is to:

    1. Give every user a unique id that is stored in Shared Preferences on Android and the Keychain on iOS
    2. Hardcode a key in both the client app and the web application
    3. Use the RijndaelManaged cryptography class to encrypt every request/response using the hardcoded key

    Call me paranoid, but I want to make sure that, for example, it is impossible for anyone to get my key if it is hardcoded in the app. Also, this approach doesn't cover the scenario where the key is somehow leaked and I have to change it. I'd love to get a second opinion from someone more experienced about this.

    @DavidDancy wrote some really solid stuff to learn from, thanks!

    But in a broader scenario, any legitimate user of your app will always have the option to use a tool like Telerik Fiddler to sniff the communication between the device and the backend, even if it is encrypted, and try to replay it any way he/she deems suitable using whatever tools he/she wants! You're always better off using proper authentication to make sure the user himself/herself is allowed to use the backend, rather than hoping to control the device itself!

    The user owns the device in every sense of the word. He can do whatever he wants with it and tinker with it with all his might!

    Better to just make sure that any data that leaves the device is actually in the sole context of the user of the device, and that he can't get any "value" from stealing the data being communicated.

    You would have a rough time trying to make sure that "only" your app can communicate with the backend. The user could always try to replay the communication being transferred.
