Google Flutter goes Beta at #MWC18

What is Flutter? 


According to Google, Flutter is a mobile UI framework for creating high-quality native interfaces on iOS and Android. As a Google Partner and a company focused on building cross-platform mobile solutions for individuals and organizations, we are thrilled to see a product like Flutter be released into Beta.


Better than other Cross-Platform Solutions


First of all, this initiative is backed by Google, which gives it a strong start. The platform integration is seamless, and the framework lets us build quickly with great performance on both major platforms (iOS and Android). Sure, there are some bugs and shortcomings, but that is always expected in a Beta version. We are on a trial run and, so far, our team loves it.



The Flutter team highlights the benefits best in their Medium post (by Seth Ladd, Product Manager at Google for Flutter):


  • High-velocity development with features like stateful Hot Reload, a new reactive framework, rich widget set, and integrated tooling.
  • Expressive and flexible designs with composable widget sets, rich animation libraries, and a layered, extensible architecture.
  • High-quality experiences across devices and platforms with our portable, GPU-accelerated renderer and high-performance, native ARM code runtime.


As a cross-platform mobile application development company, we are very excited about this solution because we can start using it immediately with our current apps. We don't need to write a complete app in Flutter; we can simply add new Flutter-based screens to existing apps. Flutter is better than most of the cross-platform solutions we use today because it allows us not only to build for two platforms, but also to change the source code and see the UI update in seconds, making the development process significantly faster.


If you are interested in learning more about Flutter, please reach out to schedule an informational meeting.




Mobile World Congress (#MWC18)


MWC is one of the biggest events on the mobile calendar. This year, more than in the past, the focus is moving beyond our traditional understanding of mobile apps and pushing into the connected life, or what MWC is calling "Intelligently Connected."


Follow Shockoe to keep up to date on the key themes this year:


  • Artificial intelligence and machine learning (AI & ML)
  • Forthcoming 5G & LTE enablement
  • IoT smart city technology and edge computing devices
  • Big data and analytics
  • Technology in society and net neutrality
  • Consumer smartphone and tablet devices

Comparing React Native to Axway Titanium

Here at Shockoe we often use cross-platform tools to build our apps. Using a cross-platform tool allows us to have one code base for apps that run on multiple platforms. There will be some platform-specific code, but most things can be shared. Our cross-platform tool of choice is Axway Titanium.

It used to be that cross-platform tools heavily leveraged WebViews. Tools like Cordova (formerly PhoneGap) let the developer write a mobile website using HTML, CSS, and JavaScript, and then handle showing that content to the user inside of a native WebView. Instead of the WebView approach, Titanium gives you a JavaScript context and provides a bridge that handles interactions between the JavaScript environment and native components. Titanium stood out because it actually interacted with native components, but it is no longer the only framework that takes this approach. A couple of years ago Kyle took an early look at React Native. Let's take another look and see how React Native has come along.

Getting Started

Start off by heading over to the React Native Getting Started page. It offers two options: Quick Start and Building Projects with Native Code. I have not tried the now-default Quick Start option. Several documentation pages refer to needing to "eject" your application if it was created from the Quick Start. For that reason alone, I have only used the Building Projects with Native Code option.

There are a few dependencies to install, but the guide walks you through what you need. You will need Node.js and the Watchman package for observing changes, as well as the React Native CLI. Additionally, you will need Xcode if building for iOS and Android Studio if building for Android.

Once you've got the dependencies installed, you create a new project with the CLI:
react-native init AwesomeProject

Running the App

With no changes to the code base, you can immediately build the app you just created. In a Titanium project, all builds are handled through the Axway Appcelerator CLI or Axway Appcelerator Studio. This is not the case with React Native: it seems you can only build to an iOS simulator, Android emulator, or Android device with the React Native CLI. To target the iOS simulator, use:
react-native run-ios
To target an Android device or emulator, use:
react-native run-android

The options provided with these commands are a little lacking compared to the options with the Axway Appcelerator CLI. In my time with React Native, every simulator build chose the iPhone 6 simulator; I could not find an option to specify a different simulator with the CLI. Additionally, the CLI does not handle multiple connected Android devices well. You need to have only a single connected Android device or running emulator.

So how do you target other iOS simulators or build to an iOS device? Open Xcode! From there you use the same build options that a native developer would use. This is a huge difference from Titanium, which basically discourages the use of Xcode for anything but building native modules. If you've never done native iOS development, this can be a little daunting at first. It's simple enough to find the play button and the drop-down to select your build target, but what if you want to do an ad hoc distribution build? Fortunately, there are plenty of resources out there for learning Xcode.

How about Android builds? This is an area I am not as familiar with. Because the React Native CLI is capable of building to a device, I haven't tried to build the project with Android Studio. I have, however, generated a signed APK: the React Native documentation has a guide, but it comes down to using Gradle.

Editing the App

React Native does not provide an IDE like Axway Appcelerator Studio. The documentation does suggest taking a look at Nuclide, a package for Atom that claims to set up an environment for developing React Native. I found I wasn't taking advantage of its features, so I uninstalled it after a couple of days in favor of plain Atom.

So you can open the code in a text editor; where do you go from there? With a Titanium project, at least an Alloy one, the entry point is alloy.js, and from there the index controller is loaded automatically. React Native provides entry points at index.android.js and index.ios.js. From there you can load whatever components you wish. The simplest thing to do is to edit some of the text provided with the sample project. Once you've made an update, you can easily see your changes without rebuilding your app!

Axway Titanium provides a live view feature to see your app update as the code changes, and React Native offers something similar. On the iOS simulator you can press Command + R to reload the code from the React Native packager; on an Android emulator you can achieve the same thing by tapping R twice. Reloading can also be accessed from a built-in developer menu! To access the developer menu, simply shake your device. You will see options to reload, enable remote JS debugging, enable live reload, and more.

Debugging Your Code

Axway Titanium attaches a console to builds made directly to a device, emulator, or simulator. The React Native process ends as soon as a build is installed and does not attach a console. Instead, you can enable remote debugging through the developer menu and debug your app in Google Chrome. You do not see a DOM representation of the app, but you do get access to the console and debugging tools! The debugging is done over TCP, so the device does not need to be connected to your computer: inside the developer menu you can change the URL used for remote debugging, so you can debug as long as the device and the machine running Google Chrome are on the same network.

Moving Forward

This has only been a brief look at getting started with React Native. In the future, I would like to revisit this topic to discuss more configuration, component driven design, and interacting with native code. React Native is very young, but it has come a long way in a short period of time. I am very excited to see how it matures as a cross-platform framework.


Kotlin: Three Reasons To Start Using It Today

With the announcement at Google I/O 2017 that the Kotlin programming language will be officially supported as a first-class citizen of the Android framework, there has been a lot of talk about what Kotlin is and how it compares to Java. This post highlights the reasons why our development team at Shockoe feels that Kotlin is the wave of the future and why Android developers should start adopting it.

What is Kotlin?

Kotlin is a statically typed programming language that runs on the Java Virtual Machine (JVM). It's a multi-paradigm language that combines the object-oriented programming you'd see in languages like Java with functional elements like those you'd find in JavaScript.

Why should you start using Kotlin?

Here are our top three reasons why you should jump in:

#1 Easily integrated into your mobile stack

Kotlin code is compiled into the same bytecode as your regular Java programs and uses the Java Virtual Machine (JVM) to execute it.
This means that Kotlin still has access to all the same libraries and frameworks available to you in the Java world, with the addition of those from the Kotlin standard library. It also allows Kotlin and Java code to run concurrently with one another: Java classes can call methods in Kotlin classes or libraries and vice versa, even within the same file. Take this example from a library that handles unit conversion:
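A minimal sketch of what such a function could look like (the function name and conversion factor are illustrative):

// Kotlin calling into java.lang.Math, then finishing with a call
// to the Kotlin standard library on the resulting Double.
fun kilometersToMiles(km: Double): Int {
    val miles = Math.rint(km * 0.621371) // java.lang.Math.rint rounds to the nearest whole value and returns a Double
    return miles.toInt()                 // Kotlin's Double.toInt() converts the rounded result to an Int
}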

Here we have a function that takes a Double parameter and returns an Int. However, we want to use the java.lang.Math class to round, as this feature doesn't exist in Kotlin. So we round to the nearest whole value and call a method from the Kotlin Double class to convert the result into an Int.

This duality of execution allows developers to easily convert their existing Android projects from Java to Kotlin or to simply add new features written in Kotlin, without converting previously written code.

Additionally, Kotlin has an option to compile into JavaScript that is compatible with the standard JavaScript module specifications: Asynchronous Module Definition (AMD), CommonJS, and Universal Module Definition (UMD). This allows developers to share code written in Kotlin with other environments like Node.js, or even with cross-platform environments like Appcelerator's Titanium framework.
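For instance, a bit of shared logic like the sketch below (the validation rule is illustrative) uses only the Kotlin standard library, so the same source can compile to JVM bytecode for Android or to JavaScript for the web and Node.js:

// Pure Kotlin with no JVM-only dependencies, so the same source
// can target both the JVM and JavaScript.
fun isValidEmail(address: String): Boolean =
    Regex("^[^@\\s]+@[^@\\s]+\\.[^@\\s]+$").matches(address)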

#2 Multi-paradigm language

A lot of the developers at Shockoe come from different backgrounds. Some started with Java and transitioned into writing JavaScript, while others started with JavaScript and have since learned other languages.

Kotlin adds a lot of functional features to the object-oriented nature of Java. I realize that Java 8/9 adds similar features, but this post is specific to the Android platform.

These features, coupled with improved, sugared syntax, lead to a much more readable codebase. I won't go over all of the features, but some of the most prominent are higher-order functions, null safety, better typing with type inference, and much less boilerplate code.
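As a quick illustration of two of those features, here is a minimal sketch (the lookup function is hypothetical):

// Null safety and type inference in a few lines.
fun findNickname(id: Int): String? = null // a lookup that may not find a result

fun main() {
    val nickname = findNickname(42)    // type inferred as String?
    val length = nickname?.length ?: 0 // safe call plus elvis default instead of an explicit null check
    println("Nickname length: $length")
}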

These features, in particular, allow a developer to write much cleaner code and a lot less of it. Here's an example of a common action – filtering a list – done imperatively, the way you would in Java:
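A minimal sketch (the User type and age cutoff are illustrative, and the snippet is written in Kotlin syntax to keep these examples consistent, but the loop-and-accumulate shape is exactly the one the Java version takes):

data class User(val name: String, val age: Int)

// Imperative filtering: declare an accumulator, loop, test,
// and add matching items one by one.
fun adults(users: List<User>): List<User> {
    val result = mutableListOf<User>()
    for (user in users) {
        if (user.age >= 18) {
            result.add(user)
        }
    }
    return result
}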

This isn't terribly much on its own, but it can quickly spiral out of control as you start adding more complexity. The Kotlin equivalent would look like:
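A sketch of the same filter using one of Kotlin's higher-order functions (reusing the illustrative User type from above):

fun adults(users: List<User>): List<User> = users.filter { it.age >= 18 }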

Yep, that's it. There are many more operators that can be appended to the end of that. For instance, if you wanted to map the results to strings and return the string list, you just make one minor change:
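Something like the following (again with the illustrative User type):

fun adultNames(users: List<User>): List<String> =
    users.filter { it.age >= 18 }.map { it.name }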

#3 Official Support From Google

A no-brainer, right? Still, the announcement from Google I/O 2017 is a huge deal for the language. In addition to the benefits of Kotlin over Java, such as those detailed above, Kotlin will now have full support in Android Studio and within the Android ecosystem. JetBrains and Google are working together to continue to support the language for the foreseeable future.

Kotlin is by no means meant to replace Java. However, it will allow better apps to be written in a much more modern, well-architected language that keeps developers in mind.


Now is a great time to jump into Kotlin and to start writing your Android apps with it. It will lead to better productivity for your mobile team, as they’ll be writing less code – which will be more readable and therefore easier to maintain.

Additionally, if you're a multi-platform development team, the cross-compilation into JavaScript is a great addition, as you can easily create tools that work within frameworks for both languages.

Then there are also the similarities between Kotlin and Swift, as highlighted here. These help bridge the gap between iOS and Android development teams.

Additional Resources

Official Kotlin Documentation

Sample Kotlin App

Kotlin Android (Layout) Extensions

Anko – Library with many Kotlin helper tools for Android

Kovenant – Promises for Kotlin


Virtual, Augmented and Mixed Reality… Confused Yet?

There are exciting new worlds being created, recreated, and explored as we speak: digital worlds inspired by Earth and beyond. For those of us unable to travel to places like the polar ice caps, the Sistine Chapel, Rome, the Pyramids of Egypt, Mars, or other places we may never visit in our lifetime, this is our chance. Now we have the opportunity to visit them from the comfort of our very own homes.

Our mobile enterprise company has recently branched out into the brave new world of Virtual Reality (VR). In this ambitious new venture, there are many things to consider. First, let's break down the different branches of the digital realities.

VR provides the user with a digitally created 360-degree world through a headset, whether it's Google Cardboard, an Oculus, or one of the many other headset viewers. Augmented Reality (AR) uses our mobile devices to overlay digital images on physical reality (ever heard of Pokémon Go?). Lastly, and my favorite, there's Mixed Reality (MR).

MR might be such an advanced technology that we likely won't see it catch on until VR and AR are more of a regularity. MR is the ability to use virtual reality inside of our physical world. For instance, a doctor performing surgery could hold a virtual magnetic resonance imaging (MRI) or X-ray scanner over their patient, providing an accurate view inside the patient's body. Mind-blowing, right?

Now that you have an idea of the different realities being created, let me tell you that there is nothing more exciting than having the opportunity to design the User Experience (UX) and User Interface (UI) for them. When starting the conversation of UX for VR, it's easy to get a little carried away. The possibilities seem endless (because they are), which is why it's important to focus on what's best for the user and what makes the most sense for the user to do in order to see and navigate our experiences. What does the client want to provide their users?

These questions are seemingly simple, yet necessary. A UX/UI designer needs to know what type of VR they are designing for. Is it for a headset alone, a headset with camera sensors, or a headset with gloves? What are the limitations of this experience? How far can the UX/UI designer push these limitations while still maintaining a fulfilling, positive user experience? What can a designer do to keep users returning to these fascinating VR experiences and even sharing them with others?

Users with solo headsets can only use their Field of View (FOV), or Cone of Focus, to make selections, not their hands. While this might seem limiting, it's not. Keep in mind that this is VR, where the user can turn in any direction they choose and explore a new world by just putting on a headset. Making a selection through vision is quite simple: a UX designer could use a countdown, various loading animations, or status bars. They can even invent something totally new and intuitive that hasn't been thought of yet.

Making a selection is one thing; navigating these new worlds is another. There are a lot of different things to consider when navigating in VR. For one thing, it's somewhat similar to navigating our physical world in terms of our FOV. We all have our own, some wider than others, and the Cone of Focus is how designers segment the FOV.

The UX designer should focus the user's primary actions within the initial area of vision. When we look directly forward, just by moving our eyes we can see approximately 90 degrees within our central vision. Everything outside of that is our far peripheral vision and should be designed for accordingly, with only secondary and tertiary user actions, such as motion cues or exit options, placed within those areas.

These are extremely important limitations to know when designing UX for VR experiences. These degrees of vision define how the UX should be envisioned and implemented. Without making the user work too hard to explore their new digital surroundings, the UX designer must keep all primary actions within the Cone of Focus without taking away from the extraordinary experience of VR. That means considering the visual placement of UX elements, measured in degrees of FOV, throughout the app.

While all of this information may seem overwhelming, it is also very, very exciting. Designing UX and UI in 360 degrees is a phenomenal opportunity to learn, adapt, and innovate in this amazing new digital age. At Shockoe, we are on the edge of our seats with excitement about being able to provide our clients with the intuitive experiences their users want through the innovative technology that VR offers.


Google Glasses are here

It is in Google's innate nature to constantly push the envelope, advancing technology and making life easier for us all. After many years of contemplation and development, Larry Page and Sergey Brin have finally brought a long-term dream to reality.

In addition to the self-driving car, the Google X department has developed yet another product that promises to challenge our idea of what the future may hold, with all of its glorious possibilities.

Upon presenting its techno-stylish and highly futuristic augmented reality glasses, Google has confirmed that Project Glass is in full effect. What Google Glass offers is certainly a spectacle (no pun intended). Almost reinventing the meaning of a hands-free device, the glasses give users the same full range of activities that ordinary smartphones provide, and they open up new possibilities that other devices are simply not capable of. Using transparent, interactive imagery positioned in your field of vision, Google Glass takes information that is usually accessed via search engines and media sources and places it directly in your range of sight. Now weather updates, phone calls, and even Google Maps are easily accessible with Google's cool gadget.

What could that mean for app development? A whole new range of apps is sure to spawn from this creation, entirely changing how we interact with the apps on our mobile devices. Imagine being able to navigate through unfamiliar cities without having to check whether you are going the right way, because the directions are right in front of you. Rather than being viewed through the screen of a smartphone, entire virtual worlds could be blended effortlessly into our range of sight, creating the ultimate gaming environment.

Although highly anticipated, Google Glass is still in its beta testing phase, so we can only imagine the possibilities it may bring. For this mobile device I envision an entirely new frontier as far as app development goes, but until then my guess is as good as yours.