
Mobility Takeaways from Google I/O Conference

May 17, 2018 — by Jen Looper and Veselina Radeva

Over the past week, Jen and Veselina attended the Google I/O conference, also known as “Coachella for Nerds” - a giant conference for all things Google and all Google fans.

As a member of Women Techmakers and a new Web GDE (Google Developer Expert), I was happy to see many colleagues that I normally only see online, including a cherished colleague that I’ve worked with for at least seven years but never met! What a great opportunity to meet and catch up. --Jen

Vessy and Jen hit I/O!

Beyond all the social events during the conference, including a massive electronic music concert and light show, attendees were treated to a large array of talks, all of which are available to view on YouTube. Over two days, we enjoyed keynotes, panels, introductions to new products, deep-dives into mature products and mobile and web technologies, office hours, codelabs, and sandboxes where you could try new things. There were also ‘inspirational’ talks, a new angle for I/O this year that attempted to bring new voices to the stage and expand beyond product discussions.

For many women attendees, the event started with a fun Women Techmakers dinner with great food, interesting cocktails, long conga lines, and a considerable amount of dancing with new friends. Some of the best casual discussions to be had were with women at the conference, who were always willing to chat during lunch.

Keynotes: Let’s talk about the keynotes. These are always interesting, and half the fun is watching your Twitter feed for reactions to what’s being shown on the real-time stream. There’s a major keynote at the beginning of the conference, then a Developer keynote, then several smaller, more focused keynotes. Amongst other panel-style keynotes, we attended:

Google keynote

  • Google’s commitment to “AI for Everyone” included a moving accessibility demo, in which a developer with a motor disability helped Google develop a system that uses Morse code taps, rather than a keyboard, to communicate. More worrisome, in some eyes, was the introduction of “Duplex”, Google’s conversational AI that has the ability to pass a Turing test in its mimicry of human speech. The demo involved the AI calling a restaurant to make a dinner reservation, even in the face of misunderstandings and language barriers. Twitter frothed with incredulity and discomfort.

Developer keynote

    Less “smoke and mirrors” in feel, the Developer keynote featured a solid parade of product managers showing off their latest and greatest.

    • Stephanie Cuthbertson talked about the three major goals for Android: easier distribution, faster performance, and better engagement. Specifically, new publishing formats like the Android App Bundle are being rolled out to help make apps smaller and thus drive installs. Examples included a way to dynamically deliver only the assets in /res that a particular device actually needs, and a ‘try before you buy’ experience that promises to drive installations in Google Play.

      Android Jetpack, a large suite of tools and libraries to manage your app’s architecture, was featured, in particular ‘slices’, elements of your app that can be used externally by other apps.

      Big applause went to a demo of the new super fast Android emulator available in Android Studio, as well as an energy profiler and an ADB connection assistant.

      Finally, Android Studio templates for building Android Slices were shown, with a nice interface that makes them easy to create.

    • Brad Abrams talked about conversational computing using Dialogflow, a nice way to build bot interfaces leveraging natural language processing. Working with Google Assistant just got that much easier.

    • Tal Oppenheimer discussed PWAs, noting that Google.com itself is a PWA, and showing how Chrome OS enables PWAs out of the box (see the service worker sketch after this list). The biggest applause came when she stated that Chromebook adoption has grown 50% over 2017 and that with Chrome OS you can now run Linux on your Chromebook!

    • Jia Li talked about the third-generation TPUs that are enabling powerful and fast machine learning for TensorFlow. In fact, you can build your own AI in the cloud using Cloud AutoML, as long as you provide the data, and let the cloud architecture handle the complex DevOps of machine learning. The latest and greatest in the world of TensorFlow includes TensorFlow.js, a JavaScript implementation of the TF APIs (see the TensorFlow.js sketch after this list), TensorFlow Lite (for mobile and embedded devices), and ML Kit, all of which had several sessions later in the conference giving deeper dives.

    • Francis Ma talked about new integrations in Firebase, including Crashlytics, Fabric, and Google Analytics, to make analyzing your apps’ performance easier. But most exciting is the public beta of ML Kit, allowing you to use the power of machine learning on device or in the cloud.

    • Nathan Martz discussed advances in augmented reality, including the recent release of ARCore, Google’s platform for building AR experiences. Shareable Cloud Anchors are part of a major ARCore update that enables collaborative, cross-platform AR experiences. Try the app “Just a Line” on Android to see it in action.
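
To make the PWA discussion a bit more concrete, here’s a minimal sketch of the service worker registration that (together with a web app manifest) sits at the heart of any PWA. The /sw.js path is just a placeholder for this illustration.

```typescript
// Minimal sketch: register a service worker so the app can be cached
// and work offline. "/sw.js" is a placeholder path for this illustration.
if ("serviceWorker" in navigator) {
  window.addEventListener("load", () => {
    navigator.serviceWorker
      .register("/sw.js")
      .then((reg) => console.log("Service worker registered with scope:", reg.scope))
      .catch((err) => console.log("Service worker registration failed:", err));
  });
}
```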

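Since TensorFlow.js kept coming up for us as web developers, here’s a minimal sketch of what it looks like in practice: fitting a trivial linear model with the @tensorflow/tfjs package. The data and epoch count are made up purely for illustration.

```typescript
import * as tf from "@tensorflow/tfjs";

// Learn y = 2x - 1 from a handful of made-up points.
const model = tf.sequential();
model.add(tf.layers.dense({ units: 1, inputShape: [1] }));
model.compile({ loss: "meanSquaredError", optimizer: "sgd" });

const xs = tf.tensor2d([-1, 0, 1, 2, 3, 4], [6, 1]);
const ys = tf.tensor2d([-3, -1, 1, 3, 5, 7], [6, 1]);

async function run() {
  await model.fit(xs, ys, { epochs: 250 });
  // Predict y for x = 10; the result should land close to 19.
  (model.predict(tf.tensor2d([10], [1, 1])) as tf.Tensor).print();
}

run();
```
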
    For the mobile developer, there were many exciting sessions, mostly focusing on the developer preview of Android P, with some sessions on Flutter, Google’s new framework for building cross-platform mobile apps using Dart. If you are interested in taking a look, their codelab is quite good: https://codelabs.developers.google.com/codelabs/flutter/#0.

    For the NativeScript developer, the sessions of particular interest included the Android ones, and everything on Machine Learning, which was a huge area of focus at this year’s I/O. 

    Android

    Android was definitely one of the main topics at I/O. If you’re an Android developer, there is a bunch of new stuff available for you to try out. Let’s take a closer look at what’s new in Apps and Tools.

    Apps

    Chet Haase, Dan Sandler, and Romain Guy gave a wonderful session on “What’s New in Android”. Below are the improvements they talked about:

    • No doubt one of the major announcements for Android was the Android App Bundle, which lets you package and upload your app in a format from which the Play Store can generate many different versions of the app based on the architecture, screen size, and locale of the phone where it is installed. It also gives developers the opportunity to create dynamic features that are downloaded on demand.
    • Nice new app features were introduced, including Slices, Actions, and rich notifications. Slices display rich app UI content in another app or on the web. They are part of Android Jetpack, so although Slices are a new feature, they are backwards-compatible (API 19+). Actions are deep links into your app with some additional payload - playing a specific album in Spotify, for example. Both Slices and Actions can be registered with App Indexing to allow on-device search. Notifications are now richer, supporting message images, user images, and smart replies.
    • In addition to all the improvements to the existing application components, the new Paging library for async data paging is now at 1.0, and the WorkManager library, currently in preview, will soon be there to handle all the complexity around job scheduling for you.
    • Android Jetpack is now here; it’s a set of libraries, tools and architectural guidance to help make it quick and easy to build great Android apps.
    • The deprecation policy is now something all apps and native components should comply with: all new apps need to target API 26 after August 2018, all app updates need to do the same after November 2018, and native components need to target 64-bit architectures starting in August 2019.
    • In the world of app performance analytics, Android Vitals comes with more insights, like app startup time thresholds, ANR (Application Not Responding) rate, crash rate, excessive wakeups, and slow/frozen frames. Android Vitals is now available in the Play Console.
    Tools

    Tor Norbye and Jamal Eason presented the cool new features in the Android development tools. To mention some of them:

    • In Android Studio there are environment enhancements like build speed improvements, faster emulator boot times, SQL code editing support, and Kotlin lint checks. In the world of debugging and profiling there are multiple nice new features: method tracing and a thread view in the CPU profiler, request and response inspection in the Network profiler, JNI reference and allocation tracking in the Memory profiler, and a new Energy profiler to track how your application affects battery life.
    • Navigation is now made easier with the new Navigation Editor - a visual tool for managing navigation between screens, setting the home screen, and defining deep links.
    • The Android Support Library has been refactored into the androidx.* namespace - so no more versions in package names! The change comes with tools for automated refactoring that update Java, XML, and Gradle files.
    • To enhance the Android emulator even more, the camera and emulator sensors can now present a 3D scene to support AR apps.
    • Emulator snapshots and screen recording (with video and audio) are now available - two of the most requested features.
    • There is now better test support, lint support and R8 optimizations for Kotlin.
    • Last but not least, we got a sneak preview of upcoming tooling aimed at making animations easier to build.

    Machine Learning

    If you like Machine Learning, boy were you in luck at I/O. The talks were varied, high quality, and the topic in general found its way into most non-ML talks. The TensorFlow ecosystem has grown by leaps and bounds, smaller projects based on TensorFlow have matured, and ML Kit, newly launched via Firebase, was my Big Takeaway from the conference. Let’s take a look at these areas.

    TensorFlow

    TensorFlow Developer Advocate Yufeng Guo gave a great talk on the topic, concentrating on defining the Seven Steps of Machine Learning (a code sketch of these steps follows below):

    • Gathering data

      Where does your data live, and how can you get to it? These are questions that need to be solved as you prepare for ML.
    • Preparing the data

      This part takes a long time, as you need to massage your datasets into usable groups. Facets, a data visualization tool, might help.
    • Choosing a model

      What model is the right fit for your data and the problem you are solving? Do you need a Linear Classifier, a Deep Neural Network Classifier, or a Deep Neural Network Linear Combined Classifier?
    • Training

      Take your training set and use it to train your model. Your model will make predictions which you can test for accuracy and then use to update your model.
    • Evaluating the model

      “Check your work!” Take your Test data, run it through your model, and test its accuracy.
    • Hyperparameter tuning

      Tweak the settings you chose before training - hyperparameters such as the learning rate or the number of training epochs - then train and evaluate again to see which combination performs best.
    • Prediction

      Use data that has not yet been analyzed by the model to make predictions. How did it do?
    Yufeng finished the session by offering tools that we can use today to build models, including Colaboratory, Google’s very interesting version of Jupyter notebooks. He also noted that Kaggle Kernels and datasets are available and very convenient for data scientists. Google’s own Cloud ML Engine can be used for no-ops TensorFlow training, or you can use Cloud APIs with pre-trained models, or train in the cloud yourself with AutoML. A machine learning crash course is available, as is the TensorFlow playground.
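
To make the seven steps a little more concrete, here’s a compressed sketch in TensorFlow.js that runs through choosing a model, training, evaluating on held-out data, and a tiny hyperparameter sweep. The dataset, the train/test split, and the hyperparameter values are all invented for illustration.

```typescript
import * as tf from "@tensorflow/tfjs";

// Steps 1-2: gather and prepare data. Here the "data" is invented and already
// numeric; a held-out test set is reserved for the evaluation step.
const features = tf.randomNormal([200, 4]);
const labels = tf.randomUniform([200, 1]).greater(0.5).toFloat();
const trainX = features.slice([0, 0], [160, 4]);
const trainY = labels.slice([0, 0], [160, 1]);
const testX = features.slice([160, 0], [40, 4]);
const testY = labels.slice([160, 0], [40, 1]);

// Step 3: choose a model - a small dense binary classifier in this case.
function buildModel(learningRate: number): tf.Sequential {
  const model = tf.sequential();
  model.add(tf.layers.dense({ units: 8, activation: "relu", inputShape: [4] }));
  model.add(tf.layers.dense({ units: 1, activation: "sigmoid" }));
  model.compile({
    optimizer: tf.train.adam(learningRate),
    loss: "binaryCrossentropy",
    metrics: ["accuracy"],
  });
  return model;
}

async function run() {
  // Step 6: hyperparameter tuning - try a couple of learning rates.
  for (const lr of [0.1, 0.01]) {
    const model = buildModel(lr);
    // Step 4: training (with part of the training set used for validation).
    await model.fit(trainX, trainY, { epochs: 20, validationSplit: 0.2, verbose: 0 });
    // Step 5: evaluate on the held-out test set.
    const [loss, acc] = model.evaluate(testX, testY) as tf.Scalar[];
    console.log(`lr=${lr}: test loss=${loss.dataSync()[0].toFixed(3)}, accuracy=${acc.dataSync()[0].toFixed(3)}`);
  }
  // Step 7: prediction would use the chosen model on data it has never seen,
  // e.g. chosenModel.predict(newFeatures).
}

run();
```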

    ML Kit

    In an ocean of new acronyms to parse in the world of machine learning at Google, ML Kit, “a machine learning SDK for Firebase”, stands out. While TensorFlow underpins all these products, it’s easy to get confused about where AutoML, ML Kit, and TensorFlow Lite fit into the ecosystem. Basically, they have different use cases.

    • Use AutoML to train a model in the cloud on your own data.
    • Use TensorFlow Lite to bring those trained models to your mobile and embedded devices.
    • And use ML Kit from within Firebase to enable trained models to live on device and integrate with data flowing into Firebase.

    ML Kit has entered the world via Firebase, and while that initially seems a strange fit, it makes sense when you think about it. Firebase, as Google’s PaaS offering, is positioned as the entry point for data flowing into your apps, so pairing it with a way for ML algorithms to access that data makes good sense.


    ML Kit’s primary offering includes pretrained models similar to those available via Google Cloud’s ML APIs. “ML Kit comes with a set of ready-to-use APIs for common mobile use cases: recognizing text, detecting faces, identifying landmarks, scanning barcodes, and labeling images. Simply pass in data to the ML Kit library and it gives you the information you need.” And if these pretrained models don’t work for you, or you need something more custom, then you can train a model externally and import it for use in your apps. 

    For the NativeScript developer, boy oh boy are we in luck! We have the amazing Eddy Verbruggen, who was lying in wait with his epic Firebase plugin primed and ready for ML Kit integration! And indeed, soon came the tweets we were all waiting for!
    And Eddy has released NativeScript + ML Kit into the wild! https://github.com/EddyVerbruggen/nativescript-plugin-firebase/blob/issue699-mlkit-support/docs/ML_KIT.md
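
To give a flavor of what that looks like in a NativeScript app, here’s a rough sketch of on-device text recognition with Eddy’s plugin. The module path, function name, and options below follow the ML_KIT.md docs linked above as we remember them, and the image resource name is just a placeholder, so double-check everything against the current plugin version.

```typescript
import { ImageSource, fromResource } from "tns-core-modules/image-source";

// The mlkit sub-module and recognizeTextOnDevice follow the plugin's ML_KIT.md docs
// linked above; verify the exact names against the plugin version you install.
const textRecognition = require("nativescript-plugin-firebase/mlkit/textrecognition");

const image: ImageSource = fromResource("receipt"); // a bundled test image (placeholder name)

textRecognition
  .recognizeTextOnDevice({ image })
  .then((result: any) => {
    // Log whatever ML Kit recognized; the result holds the detected text blocks.
    console.log("ML Kit text recognition result: " + JSON.stringify(result));
  })
  .catch((err: any) => console.log("Text recognition error: " + err));
```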

    Machine learning on device can really enhance our apps in the most amazing ways, and the sky’s the limit in terms of where our imagination and creativity can take us as mobile developers.

    We had a blast at I/O…now the real work begins as we integrate all this awesome stuff into our NativeScript apps!