3D Design: Journey Through Character Modeling

On June 10, 2018, I was at work minding my design business when our COO asked to meet with me. When we sat down to talk, he asked me to make our company’s mascot (the Mobile Monster) in 3D in two months. I’ve got to say I was a little surprised. Not just surprised, but super intimidated and uncertain.

I had zero 3D design experience. None, unless you count a little clay sculpting in high school and plaster sculpting in college, and that was, dare I say, 20 years ago. I had some big concerns, so I talked to one of my fellow designers who had dabbled in 3D design a little on his own time.

After some good discussions with my coworker, it was clear I needed to do some research. Like ‘how long does it take to create a custom designed 3D character with texture and color?’ It turns out it takes a fair chunk of time. These are my rough approximations based on my research:

  • The average student takes approximately 4 months to a year to master 3D modeling.

  • For experienced 3D modelers, it takes an average of 3-4 weeks to create a high-quality render. That’s if they have a high-quality 2D character to work from.

So after finding out the above, I was in the unfortunate position of giving our wonderfully optimistic COO a reality check, which went over surprisingly well. He gave me to the end of the year based on my research. A sigh of relief, and a sigh of “what have I gotten myself into?”

Let’s start with 2D

Based on my research, I knew I needed to redesign our company’s mascot in 2D before I even started to figure out a 3D program. Shockoe’s Mobile Monster is a cute little black ape-like character with orange hair and a goatee. However, it needed some updating so I could actually see what its body looked like. As of now, it’s a black blob with an indistinguishable nose/mouth that we still debate in the office today.

After an awful lot of character illustrations, I felt the monster was in a good enough place. I could now start learning a 3D design program, but which one? There are many. There’s 3ds Max, Cinema 4D, Maya, Blender, and loads and loads more. So which one? My COO asked me to pick the one that was free… of course. Blender it is.

Blender, I’ve got to say, is pretty damn cool. It’s open source, meaning its source code is released under a license in which the copyright holder grants users the rights to study, change, and distribute it. One of the coolest things about Blender is that anyone can download it at zero cost, as long as they have a compatible computer.

So, how do I learn Blender?

So, how to learn Blender? Get to it, Mr. Blog Writer. There are quite a few ways to go about learning Blender, but I will tell you how I learned it. First, I googled “Blender tutorials”, and the results were many. There are literally hundreds, if not thousands, of Blender tutorials out there. Then my wife found this guy who has a YouTube channel called the Blender Guru. After watching a few minutes of the first in a series of nine tutorials on how to make coffee and donuts, I knew I had found my guy. Andrew Price hosts the Blender Guru channel on YouTube, and he just made me laugh. When you’re learning a complex program that’s pretty intimidating, laughing while you’re learning goes a long way.

I got quite a bit of heat from our COO for learning how to make coffee and donuts before trying to create our Mobile Monster. The point of making the coffee and donuts though was to learn the program and get comfortable with it before setting out to make something intentionally. 

One initial problem I faced was time. It takes a great deal of time to practice 3D modeling. As a beginner you will undoubtedly do it wrong repeatedly and it’s a major time-killer. You won’t be able to replicate what your instructor shows you sometimes, no matter how hard you try. When you can’t replicate it, you’ll have to google it multiple times before you type in the magic phrase for what you’re trying to accomplish.

One delightful thing I found is that when you are googling for answers, you’ll find that there’s an incredibly supportive community of 3D artists out there, willing to help you achieve what you’re trying to create.

When you first start dabbling in 3D modeling, you can pretty much design anything on a basic laptop. However, you won’t be able to render anything unless you have the appropriate graphics card/processor. Once your designs start getting pretty complicated your laptop will no longer be able to keep up with everything. You’ll know you need a better computer when ‘control/⌘ + z’ freezes the program. No one wants to work in design without ‘control/⌘ + z.’

Luckily our office had the computer needed to continue my training, but this meant I could no longer train at home, which also meant I had even less time to learn the program, unless I wanted to ignore my family and spend long nights at the office (not really an option in my house). 

Once I completed the coffee and donuts tutorial, I felt on top of the world. I had done it. I had created a three-dimensional still life with lighting, texture, color, and personality. Even though I felt good about it, I hadn’t even scratched the surface of what I needed to accomplish. I had only replicated what someone taught me to do. I still needed to learn how to build a character from scratch.

Now it was time to learn character design in Blender. This time I turned to Udemy. At Shockoe, we have an immersive media director and based on his recommendation, I started the Udemy course Blender Character modeling for Beginners instructed by Riven Phoenix. 

3D Character Modeling…different than Coffee and Donuts

Let me just state that Riven Phoenix is a brilliant 3D modeler, and I might even say he’s a brilliant instructor. His teaching style, however, was night and day compared to Andrew Price’s. I must admit, it was a tough switch to Riven. Although he is brilliant, he is also somewhat drab. He taught me an entirely new way of 3D modeling, which made me feel small, since I thought I’d learned so much from my coffee and donuts. Turns out coffee and donuts is EASY comparatively! ‘Nooooo!’ I screamed to myself. But it’s ok, because as designers we have to remember there are a million different ways to design the same thing.

Riven Phoenix teaches a formula-based system where one can design on a grid, snapping each movement into place with exact precision, starting with just the single cube that is there when you open the Blender program. He teaches you how to model a 3D human figure, similar to Leonardo da Vinci’s Vitruvian Man. When I first started the tutorial, I thought it would take more months to master than I had. Little did I know how fast this tutorial would move. Riven’s formula-based system is a brilliant and efficient way to work on character modeling.

My window of time to make this Mobile Monster was shrinking, I knew it was time to shit or get off the pot. I didn’t think I was ready, but I had to at least try. So I took my 2D Illustrator designs, uploaded them into Blender, and began tracing my design with 3D cubes using the extrusion effect, which enables designers to pull or push material in any desired direction without affecting other areas of the 3D model.

After a few days of modeling this character, I began to realize that completing a detailed full-body render would be nearly impossible in the time frame I had, but I had an idea. I had just come off the heels of building an Augmented Reality (AR) effect for an app for RVATech Women, where users could raise their eyebrows and see a pair of glasses or a superhero mask appear superimposed over their face.

I knew I probably had time to design a face or head, so I started to try to use Riven’s formula-based system again, but focusing only on the face of the model I had already started. It turns out the formula-based system didn’t leave much room for error for novices like myself. I was having incredible difficulty modeling details that weren’t symmetrical, like curly hair.

But, how does Pixar do it?

I decided it was time to reach out to our Immersive Media director for some objective guidance. I explained to him the issues I was running into. He asked if I had searched how Pixar artists created their characters, and to my dismay, I hadn’t. I thought I had figured out how I would create this 3D model, but little did I know that my first Google search for “Pixar character tutorials” would open my mind to an entirely new way of 3D design yet again. Mind blown! I learned that by using flat planes, you could use the same extrusion effect to essentially map out the face of your character. Here’s the Blender Character Modeling tutorial by Darrin Lile that really helped me out.

Once you have your 2D design imported into Blender, delete the cube it starts you with, create a small plane, and add a mirror modifier to it. You can start modeling anywhere on the face you want, but since the tutorial I watched started at the lip area just below the nose, that’s where I started. Then you can extrude your plane around the mouth, nose, cheeks, chin, and so forth. If you’re like me, once you cover up the face, you might start getting excited.
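If it helps to see that setup as a script, here is a minimal sketch using Blender’s Python API (the 2.8x API is assumed; the object name, plane size, and location are illustrative guesses, not values from the tutorial):

    import bpy

    # Remove the default cube if it is still in the scene
    cube = bpy.data.objects.get("Cube")
    if cube is not None:
        bpy.data.objects.remove(cube, do_unlink=True)

    # Add a small plane roughly where the upper lip would be
    bpy.ops.mesh.primitive_plane_add(size=0.05, location=(0.02, 0.0, 1.6))
    face = bpy.context.active_object
    face.name = "FaceMesh"

    # Mirror across X so both halves of the face stay in sync while you extrude
    mirror = face.modifiers.new(name="Mirror", type='MIRROR')
    mirror.use_axis[0] = True   # mirror on the X axis
    mirror.use_clip = True      # keep center vertices glued to the seam

From there, you extrude the plane’s edges around the mouth, nose, and cheeks exactly as described above.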

Once I finished modeling a flat face, essentially a flat mask, now what? Now, this is the crazy abstract part of the process. At this point, you must import the profile (side view) of your 2D design. You need to scale your 2D design and your 3D model to the same size so you can trace your 2D design with the planes and vertices of your 3D model. You will switch from the front view to the side view, so it would be ideal to have both views open simultaneously if you want to work efficiently and not confuse yourself (and trust me, it can get confusing). 

From the front view, you’ll want to select the vertices that need to be pulled back, so you are creating a sense of depth for the face of your character. I started with the deepest parts of the face, like the back of the jaw. You will move each of these vertices back on the axis that pulls them away from the surface of the flat mask that you’ve created. 
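For the curious, the equivalent operation in Blender’s Python API looks roughly like this (a hedged sketch: the object name “FaceMesh”, the axis, and the 0.08 offset are all assumptions; in practice you do this by hand in the viewport):

    import bpy
    import bmesh

    obj = bpy.data.objects["FaceMesh"]           # hypothetical object name
    bpy.context.view_layer.objects.active = obj
    bpy.ops.object.mode_set(mode='EDIT')

    bm = bmesh.from_edit_mesh(obj.data)
    depth = 0.08                                 # illustrative offset
    for v in bm.verts:
        if v.select:                             # e.g. the jaw vertices you selected
            v.co.y += depth                      # push away from the flat mask
    bmesh.update_edit_mesh(obj.data)

    bpy.ops.object.mode_set(mode='OBJECT')

Whether “back” is +Y or -Y depends on how your reference images are oriented, so check one vertex first.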

Once you have pulled all of your vertices back to create the depth, you’re left with this weird spiky face that you don’t want. This is where it takes time to smooth out your character’s face and start defining their facial features to give them the basic shape you want them to have.

Once you have a good face for your character, you must complete their entire head. I did this using the same method, extruding the planes based on the 2D design that I had imported into Blender. Once your character’s head is at a good stopping point, modeling wise, it’s time to have fun with textures and colors and bring your character to life. 

Adding textures and color is a lot of fun, but beware: the more complicated the effects you implement in your design, the longer it will take to render, and they could affect loading time once you’ve exported your model and imported it into other programs like Unity or Spark AR. You can most likely disable any effects you put on your character in other programs when it gets down to the nitty-gritty.

PART 2: UV MAPS & SPARK AR

(10 min read)

Once I ‘finished’ my 3D model in Blender, it was time to import it into a program to make it an interactive object. For the sake of time and money, we used Facebook’s Spark AR. The only downside is that if a user wants to interact with the 3D object, they must have a Facebook account, because you’re building on Facebook’s platform. To have your object ready for importing into a program like this, your 3D model needs to have a couple of things ready.

Check these points first:

  • It is likely you made your model up of multiple 3D objects. Every object in your model will need to be the child of a parent object.
  • The parent object can be an invisible shape, but you need to center it on the x, y, and z axes, so that when your object pivots in any direction it’s pivoting from the center. If you didn’t center your object, it will become obvious once you start editing it in the AR program.
  • For every object in your model, you will need to create a UV map. You can generally do this in whatever 3D program you built your model in (see the sketch just after this list).
  • Until 5G is a common thing, you probably won’t be able to use any of the textures or colors you applied to your model in the 3D program. That’s ok though, because you can likely recreate a similar effect in Spark AR or Unity in much less time than it took to make in the 3D rendering program you used.
  • You will need to find or create high-quality texture images for your model. I would recommend pulling each of your UV maps from your 3D program and importing them one at a time into Photoshop. Then you’ll want to import the high-quality texture image over the UV map. Use the stamp tool to spread the image texture out over the whole UV map. Once you’re happy with the coverage of the UV map, you must hide the UV map and make a normal map out of the texture image. When the normal map dialog pops up, you can then adjust how much depth you want to give your texture with the levels provided.
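Here is the sketch mentioned above: a rough Blender Python pass over the parenting and UV-mapping items on this checklist (the names, the empty type, and the use of Smart UV Project are my assumptions; a real project would unwrap each mesh by hand):

    import bpy

    # Invisible parent at the world origin so the whole model pivots from center
    bpy.ops.object.empty_add(type='PLAIN_AXES', location=(0.0, 0.0, 0.0))
    root = bpy.context.active_object
    root.name = "MonsterRoot"

    for obj in bpy.data.objects:
        if obj.type != 'MESH':
            continue
        # Parent while keeping the mesh exactly where it is
        obj.parent = root
        obj.matrix_parent_inverse = root.matrix_world.inverted()

        # Give each mesh a quick UV layout
        bpy.context.view_layer.objects.active = obj
        bpy.ops.object.mode_set(mode='EDIT')
        bpy.ops.mesh.select_all(action='SELECT')
        bpy.ops.uv.smart_project()
        bpy.ops.object.mode_set(mode='OBJECT')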

 Importing into Spark AR

Now that you’ve done all the dirty work and you have a gazillion files for your project it’s time to import them into Spark AR.

  1. Click ‘File’ and select ‘New Project’ and name your new project (⌘ N)
  2. Go to ‘File’ and select ‘Import From Computer’ (⌘ I) and select the 3D object you want to import
  3. Next, click ‘Insert’ and select ‘Face Tracker’ or whatever type of tracker you want.
  4. Find ‘Assets’ on the left toolbar and select your imported object and drag it to your tracker.
    1. At this point you should see your object moving with the person’s head in the scene. Exciting stuff!
    2. If the scaling is off or you haven’t aligned your object with one of the x, y, and z axes, you can adjust some of these using the toolbar on the right when you select your model.
    3. If you click into the x, y, or z axis you can make any necessary adjustments and see real-time updates in the preview scene.
  5. Go to “Assets” again and select the + button, then select “Create New Material” and name it “Materials” or whatever makes you happy.
    1. This is where you will organize all the objects individually within your 3D model; e.g. FaceMaterial, RightEyeMaterial, RightPupilMaterial, LeftEyeMaterial, etc.
  6. Click “Assets” again and select the + button, then select “Create New Material” and name it Textures or Diffuse.
    1. This is where you’ll want to import (or drag and drop) all the files you have for the texture image, also known as a ‘diffuse.’ Make sure you’re naming everything accordingly to not confuse yourself and also to not confuse a developer who might work with these files.
  7. Lastly, click ‘Assets’ again and select the + button, then select ‘Create New Material’ and name it ‘Normal Maps’. This is where you’ll want to import (or drag and drop) all the files you have for the normal maps you created in Photoshop. Make sure you’re naming everything accordingly to not confuse yourself and also to not confuse a developer who might work with these files.
  8. Once you have imported every file associated with your 3D object, it’s time to assign textures (diffuses) and normal maps to your individual 3D objects.
    1. In the left toolbar, make sure you’ve expanded your 3D model layers so you can see all the associated layers of your model.
    2. Select the first layer or 3D object in the expanded list. On the right toolbar under Materials, you’ll want to make sure you’ve checked the checkbox.
    3. Select the drop-down button and select the desired material.
  9. In the left toolbar, under Materials you’ll want to select the material you just assigned to your 3D object layer
    1. Selecting this material will change the options in the toolbar on the right for this material
    2. Under Shader Properties, select the Texture drop-down and select the desired texture (diffuse) file that you have uploaded to this project file.
    3. You can select a custom color as desired.
    4. Under Normal, check the checkbox and select the Texture drop-down and select the desired normal map for your material.
    5. You should see the changes in the preview scene and make adjustments as needed.
  10. Now just repeat steps eight and nine until all of your 3D objects have assigned textures (diffuses) and normal maps. Once you’ve assigned all your materials, you can play around in the program and adjust the levels as needed. There’s a slew of cool things I’m not covering, but that’s where you come into the program and learn for yourself.
  11. When you’re at a stopping point and you think you’re ready to package this bad boy up, scroll over to “Project” and select “Show Asset Summary”
    1. In the Asset Summary Window, you will see any files you’re not using. It would be ideal to trash them if you can. You want your file to be as small as possible, especially since you plan on having this live and perform on a mobile device.
    2. Once you’ve cleaned up your file, select the bottom right button “Compress All”.
    3. Scroll over to “File” and select “Export”
      1. Selecting “Export” will automatically tell you whether your project is ready to export your compressed 3D model or not. You will see a list of device requirements and how your file compares to what it requires.

Ready to Submit!

From my experience, this was one of the most challenging parts of the process. Spark AR limits exports to somewhere between 1.3 MB and 2 MB, and my file was 24 MB when I first tried to export my compressed file. I then had to work in reverse, recreating all my normal maps and diffuse files and changing their size to 512 K in Photoshop, just to come close to meeting these requirements. It took hours to get to “export ready,” but I was determined to make it happen. Ultimately, I had to sacrifice several design details that improved the overall design. I’m not sure the average person would have noticed or appreciated the difference those details made, but I did, and it was a hard pill to swallow. However, with 5G on the horizon, hopefully these MB requirements will be a little easier to meet with a detailed 3D model.
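For the batch-resizing pass, something like this minimal Python/Pillow sketch would do the job (Pillow was not part of my original workflow, and the folder names and 512×512 target are assumptions for illustration):

    from pathlib import Path
    from PIL import Image

    SRC = Path("textures_full_res")
    DST = Path("textures_512")
    DST.mkdir(exist_ok=True)

    # Shrink every texture and normal map to 512x512 before re-import
    for img_path in SRC.glob("*.png"):
        with Image.open(img_path) as img:
            small = img.resize((512, 512), Image.LANCZOS)
            small.save(DST / img_path.name, optimize=True)
            print(f"{img_path.name}: {img.size} -> {small.size}")

Smaller textures and normal maps were what finally got the export under the limit.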

To preview the Monster I created in Blender, and brought to life in Spark AR, please scan the following QR Code or click the button to experience it for yourself:

Jason Day

UX/UI Designer

Jason graduated from the Savannah College of Art and Design and holds degrees in Illustration and Sequential Art. His diverse professional career has ranged from comic book artist and picture framer to retail store manager, photographer, building inspector, and designer, demonstrating his ability to understand multiple facets of thinking and implement them in intuitive ways.

3 Things to Consider When Designing Your First Voice App

“Ok Google, give me an intro about designing for voice.”

Darn. But in spite of the Google Assistant’s inability to help me write this blog, these modern voice assistants are actually pretty neat. And with huge amounts of investment from Amazon and Google, they are here to stay. Amazon has been aggressive this past year in expanding its Alexa services into new products like the Echo Show, while Google is embracing an even bigger screen-assistant push with its Nest Hub Max and the continued development of the Assistant in the Android OS.

Voice is the closest thing we have so far to truly embracing the “Star Trek” future. Right now on the “Star Trek Future Scale” (STFS, obviously) we are still in our infancy, but here are some things to consider as you design and develop the voice apps that will help us grow the medium into something powerful and useful for decades to come.

 

 

1. Alexa, Google, and Siri… Oh my!

The different flavors of assistants all bring unique things to the table. We recommend taking some time to understand the opportunities and limitations of each platform before even attempting to design new experiences. Focusing on the Alexa and Google experiences is a smart start; Siri was a non-starter for several of our projects, and Amazon and Google both have excellent documentation to start from.

Using these as a baseline similar to how you might use Human Interface Guidelines or Material Design in your screen UI design helps ensure your teams are designing experiences that feel natural to the larger suite of services offered from these assistants.

With a firm understanding of the tech behind the assistant, we can move into some actual design.

2. Do your (user) homework

Voice is a new and powerful user interface. It can provide an immense amount of information with no need to worry about navigational hierarchy, where the user could or could not be looking, or even what their intent is. We just have to predict it.

However, predicting WHAT they want is about as important as WHERE they will need it. Most people don’t want to shout into their phones in line at the grocery store.

The time spent understanding your users’ habits and journeys will make it so that when you provide a voice experience at those high-value moments, it feels precise and appropriate. The Alexa on the dresser that you can ask about the weather while you are picking your clothes for the day fits perfectly into a user’s routine while making the hands-free nature truly valuable. The home hub in the kitchen that can set a timer while a person’s hands are busy chopping, stirring, and cooking makes for a seamless Star Trek-esque experience.

Example of an overcomplicated, bad flow

3. No Fat Diet

Voice is one of the fastest and most semantic means to information that technology offers. The goal is to create a framework that provides the experience of: what you ask for, you receive. Lots of voice experiences with great concepts fail because they don’t edit nearly enough. This ideal input/output experience is only possible if your voice service can get out of its own way. With voice, do your best to avoid having extra steps in any process. Conversational interfaces can be interesting, but they can also feel a lot like a robo-phone support system, and those don’t feel like Star Trek at all. They are bad.

Lengthy responses are also a big no-no. If the assistant rambles on and on when a person asks a simple question, you can expect it to get the “Alexa, shut up” real quick. Navigational commands should be short, sweet, and direct. Informative responses (the weather, information about a product, confirmation of an order) are best when they are short and provide a means to expand if the user wants (“Find out more in your ___ app,” or ask for more information, things like that).

Don’t answer a question with a question. While the question-with-a-question model has been a powerful way to create critical thought in students, your voice assistant should avoid it at all costs. In its current state, voice assistants still very much feel like computers, and users still treat them as such. Input -> output is a much more important experience than input -> input -> input -> output; even if the end result output is somehow “better,” that experience is not.
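To make the short-answer pattern concrete, here is a hedged Python sketch of a weather reply shaped like an Alexa Skills Kit JSON response (the field names follow the Alexa response format as I understand it; the wording and values are made up):

    def build_weather_response(high_f: int, condition: str) -> dict:
        # Answer first, briefly, then offer a way to expand.
        speech = (
            f"Today will be {condition} with a high of {high_f} degrees. "
            "You can say 'more detail' for the hourly forecast."
        )
        return {
            "version": "1.0",
            "response": {
                "outputSpeech": {"type": "PlainText", "text": speech},
                # Leave the session open briefly so the user can opt in to more
                "shouldEndSession": False,
            },
        }

    print(build_weather_response(72, "sunny"))

One sentence of answer, one short hint about how to get more: input -> output.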

That good flow

Prototype with the tech. This one we learned the hard way, but the sooner you can get the experience on a device the better. Map out your critical path and start testing it with a real Home Hub or Alexa. You’ll be able to iterate your flow much sooner, learn what words the device struggles with and really get a feel for your voice experience as your users will use it rather than it being refined in a doc until it is perfect.

Test early and often with the voice devices. Trust us, you’ll be much happier with your final product.

Mason Brown

Mason Brown is an experience designer at Shockoe. His experience includes work on AR solutions for Benefit Cosmetics, VR and retail design with Google, and VR experiences with American Eagle, as well as many projects in the non-profit space promoting athletics in the Richmond area. With a passion for systems and product design, Mason works with clients to identify business problems, understand the needs of the user, and craft products that benefit both. He strives to understand the needs of people to make products that feel personal.

Google Flutter goes Beta at #MWC18

What is Flutter? 

 

According to Google, Flutter is a mobile UI framework for creating high-quality native interfaces for iOS and Android. As a Google Partner and a company that has focused on building cross-platform mobile solutions for individuals and organizations, it is amazing to see a product like Flutter be released into Beta.

 

Better than other Cross-Platform Solutions

 

First of all, this initiative is backed by Google, which gives it a strong start. Also, the performance and platform integration are seamless and the structure allows us to build at high speed with great performance on both major platforms (iOS and Android.) Sure, there are some bugs and shortcomings, but that is always expected in a Beta version. We are on a trial run and, so far, our team loves it.

 

 

The team at Flutter highlights the benefits best on their Medium Post (Seth Ladd, Product Manager @ Google for Flutter):

 

  • High-velocity development with features like stateful Hot Reload, a new reactive framework, rich widget set, and integrated tooling.
  • Expressive and flexible designs with composable widget sets, rich animation libraries, and a layered, extensible architecture.
  • High-quality experiences across devices and platforms with our portable, GPU-accelerated renderer and high-performance, native ARM code runtime.

 

As a cross-platform mobile application development company, we are very excited about this solution because we can start using it immediately with our current apps. We don’t need to write our complete app in Flutter; we can simply add new Flutter-based screens to existing apps. Flutter is better than most of the cross-platform solutions we use today because it allows us not only to build for two platforms, but also to make changes to the source code and see the UI updates in seconds, making the development process significantly faster.

 

If you are interested in learning more about Flutter, please reach out to schedule an informational meeting.

 


 

Mobile World Congress (#MWC18)

 

MWC is one of the biggest events on the mobile calendar. This year, more than in the past, the focus is going beyond our traditional understanding of Mobile Apps and pushing into the connected life or what MWC is calling “Intelligently Connected.”

 

Follow Shockoe to keep up to date on the key themes this year:

 

  • Artificial intelligence and machine learning (AI & ML)
  • Forthcoming 5G & LTE enablement
  • IoT smart city technology and edge computing devices
  • Big data and analytics
  • Technology in society and net neutrality
  • Consumer smartphone and tablet devices

Design Tips to Increase Satisfaction in Banking Apps- Part 1

Retail banking consumers now prefer their mobile devices over any other way of interacting with their bank, which makes a mobile app a primary component of overall customer satisfaction. With how easy it is to switch banking providers at a moment of dissatisfaction, banks need to place extra emphasis on keeping their customers happy and loyal. This starts by giving customers the best tools available and a user experience that helps them access and navigate their banking needs without difficulty. Read more about our design tips for banking apps below.

 

For the first section of this two-part series, we will cover examples of best practices that we have seen play a role in facilitating engagement and improving the user experience.

Any questions surfacing as you read? Give us a ring! You can always connect with us here.

 

  • Search & Navigation (Part 1)
  • Content (Part 1)
  • Guidance (Part 2, coming soon)
  • Privacy & Security (Part 2, coming soon)
  • Appearance (Part 2, coming soon)

 

 

Search & Navigation


According to J.D. Power, ‘Ease of Navigating’ is the key differentiator among top-performing mobile banking apps. If a consumer can find what they need in the app, this often yields a happy customer. This satisfaction can also impact bank operations by reducing calls to support centers with potentially aggravating wait times.

 

Let’s jump headfirst into some easily executed ideas to help improve your app’s search & navigation as early as today.

 

Easy Login

 

Biometric logins such as fingerprint, face, or voice can facilitate a client’s access to their account.


Personalization Capabilities

 

Some banks give users the ability to customize the application experience so that each visit addresses their specific needs.


Using Navigation Icons with Labels

 

Icons are meant to be universally recognized, but in many cases they are not. It’s always a safe bet to provide a label next to the icon for clarity.

 


Use Plain & Simple English

 

Avoid using branded names that might be intuitive to your company, but not to a user. In short: use plain English when possible.

 


 

Transaction History Search

 

Most banking apps default to filtering transaction history by date. Giving the user the ability to search their account is one more way to facilitate finding that specific transaction they have in mind.

 


App-Wide Search

 

Few banks offer app-wide search to locate features & information. It might be just what your clients need to discover new or overlooked features.

 


Clear ‘Back’ Access

 

Avoid using a home icon or a cancel button in place of a back button.

 


Autofill/Type-Ahead Searching

 

We continue to be surprised at the number of banks that don’t make use of this simple yet effective interaction. Your customers will be thrilled to have it implemented.


Content

 

The content that users access in-app should be concise, easy to find, easy to understand, and helpful in reaching their goals. Simple, right? Here are a few ideas:

 

Key Information Front and Center

 

Some applications give users the choice to view their account balances before login.

 


 

Helpful Services

 

Provide customers with additional services that could help them reach their financial goals.

 


 

Real-Time Alerts

 

Use real-time alerts to keep customers informed on important account updates such as direct deposits, personal information changes, and bill due dates.

 


 

Avoid Hiding Information

 

Some banks hide interest rates behind an extra tap or elaborate application process. Be nice to customers and let them know what they need to know.

 


 

Avoid Jargon-Heavy Content

 

Avoid words such as Debit, Payee, APR — instead use Withdrawal, Recipient, Interest Rate.

 


  • Guidance (Part 2, coming soon)
  • Privacy & Security (Part 2, coming soon)
  • Appearance (Part 2, coming soon)

 

Editor’s note: 

We know you’re thirsty for more. Part 2 will be coming very soon! While you wait, check out our latest thoughts on UX Strategy for Banks. 

Have any additional questions or want to discuss what Shockoe can do for you? Click here to connect with us. 

3 Tips to Start Using Motion in Design

Motion connects the designers and developers who are working on a mobile application with its users. Scrolling, navigating through screens, and adding or editing content may all be inherent features of an app in 2017, but the app still needs to feel right. UX designers live for the challenge of making an app feel right to the user, and motion is one tool in their arsenal. As Shockoe tackles mastering this tool, here are three tips for how to start thinking about integrating motion into your designs.

Tip 1: Show the Relation

You’ve put in the work, made the sitemap, and even mapped out the flow. You know exactly how to get from Screen A to Screen Z. Do your users? It’s important to make sure your users will be able to navigate the app with the same fluidity you do. Probably the best option for ensuring this is one of the simplest: show your user where the screens are coming from.

Navigating from the leftmost tab to the one on the right? Show that by pushing your current content off-screen to the left, making room for the new content coming in from the right. Google Play Music is a fantastic example of how an entirely new page can originate from a single, much smaller element. It shows the growth of that element into a full page.

Tip 2: Don’t Lose the Users

This touches a bit on the last point, but it is key that you don’t confuse your users or lose them in a complicated motion. If you have too many elements moving in too many directions, or even one element moving too far, you may run into some problems.

Examples of what to do and what not to do both come from different implementations of the same feature in different versions of the Android operating system. On devices that ran Android M, there was a hovering search bar at the top of the home screen. This was a great addition, bringing a Google search right to the forefront of the user’s most-frequented screen. As you might expect, the search automatically offered suggestions as the user typed.

On the newest Pixel 2, that search bar has been moved to the bottom of the home screen, just under the app drawer and just above the software buttons. A UX/hardware issue is solved here by allowing users to reach their search bar more easily, but a visual transition issue is created. When the user taps the new bottom-anchored search bar, it behaves as it did before, jumping to the top of the screen to populate your autofill search results. This is probably nit-picking and just requires some getting used to, but it makes the search bar feel like more of an “activation” and not a true, transforming element on the device’s screen. That takes away a bit of what made that simplicity in movement so special.

Tip 3: Have Some Fun. Find It, If You Have To

This point applies to everyone in design, but it holds special weight in designing motion as there is so much that can be done. This is more for your own sanity, but it’s very important in every project to have even a little fun, and not nearly enough people value taking a moment to do so. A solid check for this is looking inward and thinking about what you would want to see an app do.

Take 15 minutes, grab your notebook and a pencil, create some sketches, and just … go with it. Look at what’s been done in other apps, what hasn’t, and find what works for you. Don’t limit yourself to the mobile realm for inspiration; consider television shows, video games, etc., as well. The kind of work we’re most proud of is typically the work we enjoy making, so be sure to explore every corner of your creativity when designing motion.

So what are your thoughts?

Hopefully, these tips have helped you start thinking about the ways you can use motion in your designs. In this post, we touched on the basics of motion; we look forward to expanding on these ideas in a future post that dives deeper into the nitty-gritty details on how to make motion work in your apps. As you start integrating motion into your projects, reach out and let us know what you think, if you have any thoughts to add, or if these tips have helped you out in any way.

Editor’s Note: 

Want more tips on Design? Check out our most recent blogs:

10 Commandments of Designing for Accessibility Every Designer Needs to Know

How to Apply Minimalism to Complex Apps

Want to stay connected on all things mobile?

 

Sign up for the Shockoe newsletter and we'll keep you updated with the latest blogs, podcasts, and events focused on emerging mobile trends.

 

How to Create Moodboards that Help You Better Understand Your Client

For new design projects, blank artboards can strike fear in many designers. Approaching a new task with a blank slate is never easy, especially under a deadline or without a clear objective. That blank rectangle can be the biggest block to any momentum in getting a project moving. But don’t despair! Using mood boards can help kick off any design project and remove the fear of directionless design.

What is a mood board?

A mood board is a collection of images visually unified by a shared style, like color, texture, UI framework, or theme. Part of what makes the tool so valuable is how malleable mood boards are to each project. Each board simply needs a cohesive connector that all the elements on the board stem from.

For example, this board shows off monochromatic color schemes:

And this one shows different examples of Google’s material design applied to different screens:

 

Having a theme to draw them together allows design teams and clients to react to different conceptual directions, so different ideas can be explored early and with little time invested.

When should you use mood boards?

Mood boards should be implemented early. They are a fairly low-investment tool that you can use to get in step with your client/stakeholder/design lead early on. With clients at Shockoe, we use them early in our design process to get a sense of the client’s needs and tastes while also exploring some potential directions without burning too much time.

Potential times to bring mood boards into projects: Early design discussions, design concept pitching, stakeholder exercises.

How to make mood boards:

There are many ways to develop useful mood boards. When planning boards, a great way to start is to define your audience. Is the purpose of the mood board for you to discover tones and feels? Is it to guide a client or stakeholder in a certain direction? Defining your audience and purpose will help shape the direction of boards to create productive discussion.

Want to explore different color options? Group different items of the same color to get a feel for the application.

Want to look into different design systems? Show them together, or back to back.

It really helps to have a log of potential approaches and looks to help influence the direction of the project and discuss options before running with a certain design.

I tend to approach my mood boards in three steps: Think, Collect, and Organize.

1: Think

Starting out, I write down as much information as I can. I try to roll up all my knowledge on the project so far and start thinking of things I want to use for inspiration, like colors, products to be inspired by, people or places that should influence or impact the design, or just cool anecdotes I can recall from earlier meetings with a client.

2: Collect

After the brainstorming step, I create a project folder to store assets. I keep anything and everything I find interesting in that folder that could be used in the project. Pinterest and Dribbble are obvious choices when searching for inspiration, but I also find lots of really interesting ideas by looking at different mediums. Furniture, architecture, and game design blogs have all provided great ideas to consider for potential inspiration.

3: Organize

From here, I use my favorite tool for mood boards to collect and organize the different images designers and I have curated: Google Slides.

 

 

 

 

 

Google Slides acts as a fantastic repo to collect and organize thoughts. The ability to collaborate with other designers makes the tool even more awesome when working on larger design teams.

When laying out designs, I find it useful to pick a lead image and then build out elements around it. That way when discussing the project with a design director or client, you can orient the discussion around that image and define your intent with a style in that direction.

Presenting boards

The last step of using mood boards in design is getting a sense of which way to run when it comes time to start the actual design process. I’ve always enjoyed the candid and open discussion around mood boards. Planning out how you want the discussion to go will really help create a structured conversation that will benefit both you and the client. By knowing the discussion you want to have around the boards, you are able to ensure that the feedback you receive provides clear direction so you are not left with that dreaded blank artboard rectangle.

If you’re ever in a stalemate about which way a client thinks a project should go, or if you want to feel out some new or riskier design directions, mood boards are a fantastic tool to quickly define a style and explore different routes for the project. Don’t stare at blank art boards—go make mood boards!

Notes from Editor:

You can find more design blogs from our team on our blog page.

[INFOGRAPHIC] 10 Commandments of Designing for Accessibility Every Designer Needs to Know

How to Apply Minimalism to Complex Apps

Virtual, Augmented and Mixed Reality… Confused Yet?

Want to stay connected on all things mobile?

 

Sign up for the Shockoe newsletter and we'll keep you updated with the latest blogs, podcasts, and events focused on emerging mobile trends.

 

3 Ways to Improve User Engagement on Your Mobile Solution

After months of development, your app finally makes it onto the app store. However, a few weeks later, you take a look at the app’s analytics to find an unexpectedly high number of total uninstalls.

Why are users deleting your app and what can you do to improve user engagement?

1. Improve User Onboarding
A crucial, often overlooked part of designing an app is the user onboarding process. User onboarding is essentially the way the app introduces itself to a new user. Within the first few minutes of use, your app should make a solid first impression.

Resolution:
– Start the app off with a friendly tour to get the user acquainted with the main features
– Highlight features one at a time – do not overwhelm your user with introductions to all of the features at once
– Place mission-critical information up front and concisely
– Place user values up front – you want the user to envision how they will be using your app in their day-to-day life as soon as possible.

Below are a few examples of user onboarding in the Winn Dixie app. Our UX and UI designers put great care into the onboarding strategy, putting the designs through various critiques and presentations with the client. User onboarding testing was implemented as early as the wireframe stage.


2. Reduce Clicks
Ideally, a user wants to use the fewest clicks possible to get to the information they want. Information or features buried in tabs and menus may infuriate users trying to accomplish a simple task. Sometimes the cost of effort may not be worth the payoff for a user.

Resolution:
To resolve these pains, consider bringing in various testers as early as the design phase. Sometimes paper prototypes can be very telling of a user’s engagement with an app, based on something as simple as the app’s layout. Reduce the amount of effort a user has to make by designing the method of navigation with well-defined paths.

3. Debug your app

At first glance through reviews of a low-rated app, the number one issue reported by users is: the app is buggy and keeps crashing. The bane of any user’s existence is software they cannot use properly. Buggy apps can be caused by a multitude of issues. Here are the top three reasons why your app may be buggy and bugging your customers away:
– Android or iOS hardware and software have updated causing your app to be out of date
– Uncaught memory leaks
– Weak user testing

Late last year, in anticipation of the release of iOS 10, the Shockoe development team thoroughly prepared by catching up on documentation and thumbing through deprecated features. Apps like 21st Century were given an update to ensure that the app would not be out of date. Changes included improvements to security and touch-ups on deprecated UI features.

Resolution:
Test the app thoroughly to find as many bugs as possible and prepare another cycle of development! At the end of development, put the app through another round of testing to ensure that your app is functioning as ideally as possible.

Positive user engagement is essential to maintaining users. While the suggested improvements aim to enhance the user experience of your app, be prepared to take note and study how these methods impact user interaction. Taking a closer look at what propels users to continue using your app, or at what users interact with most within your app, will greatly help you analyze and improve the strong points of your mobile solution.

Want to stay connected on all things mobile?

 

Sign up for the Shockoe newsletter and we'll keep you updated with the latest blogs, podcasts, and events focused on emerging mobile trends.

 

Virtual, Augmented and Mixed Reality… Confused Yet?

There are exciting new worlds being created, recreated, and explored as we speak. There are digital worlds being developed from the inspirations of Earth and beyond. For those of us not able to travel to the polar ice caps, the Sistine Chapel, Rome, the Pyramids of Egypt, Mars, or other places we may never visit in our lifetime, this is our chance. Now, we have the opportunity to visit them from the comfort of our very own homes.

Our mobile enterprise company, Shockoe.com, has recently branched out into the brave new world of Virtual Reality (VR). In this ambitious new venture, there are many things to consider. First, let’s break down the different branches of the digital realities.

VR provides the user with a digitally created 360-degree world using a type of headset, whether it’s Google Cardboard, an Oculus, or one of the many other headset viewers. Augmented Reality (AR) uses our mobile devices and overlays digital images over physical reality (ever heard of Pokemon Go?). Lastly, and my favorite, there’s Mixed Reality (MR).

MR might be such an advanced technology that we likely won’t see it catch on until VR and AR are more of a regularity. MR is the ability to use virtual reality inside of our physical world. For instance, a doctor performing surgery on a patient could use a virtual magnetic resonance imaging (MRI) or X-ray scanner over their patient, providing them with an accurate view inside their patient’s body. Mind-blowing, right?

Now that you have an idea of the different realities being created, let me tell you that there is nothing more exciting than having the opportunity to design the User Experience (UX) and User Interface (UI) for these exciting realities. When starting the conversation of UX for VR, it’s easy to get a little carried away. The possibilities seem endless (because they are), which is why it’s important to focus on what’s best for the user, what makes the most sense for the user to do in order to see and navigate our experiences. What does the client want to provide their users?

These questions are seemingly simple, yet necessary. A UX/UI designer needs to know what type of VR they are designing for. Is it for a headset alone, a headset with camera sensors, or a headset with gloves? What are the limitations of this experience? How far can the UX/UI designer push these limitations while still maintaining a fulfilling, positive user experience? What can a designer do to keep users returning to their fascinating VR experiences and even sharing them with others?

Users with solo headsets can only use their Field of View (FOV) or Cone of Focus to make their selection, not their hands. While this might seem limiting, it’s not. Keep in mind that this is VR, where the user can turn in any direction they choose and explore a new world by just putting on a headset. Making a selection through vision is quite simple. A UX designer could use a countdown, various loading animations, or status bars. They can even invent something totally new and intuitive that hasn’t been thought of yet.

Making a selection is one thing, navigating these new worlds is another. There are a lot of different things to consider when navigating in VR. For one thing, it’s somewhat similar to navigating our physical world in terms of our FOV. We all have our own, some of us more or less than others, and the Cone of Focus is how designers segment the FOV.

The UX designer should focus the user’s primary actions within the initial area of vision. When we look directly forward, by just moving our eyes we can see approximately 90 degrees within our central vision. Everything outside of that is our far peripheral vision and should be designed accordingly by placing only secondary and tertiary user actions within these areas of vision, such as motion cues or exit options.

These are extremely important limitations to know when designing the UX for VR experiences. These degrees of vision define how the UX should be envisioned and implemented. Without making the user work too hard to explore their new digital surroundings, the UX designer must take into account the Cone of Focus for all primary actions without taking away from the extraordinary experience of VR. This means considering the visual placement of UX elements in terms of FOV degrees throughout the app.
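As a quick back-of-the-envelope illustration (my own numbers, assuming a flat panel and the roughly 90-degree central-vision figure above), a few lines of Python show how wide a UI panel can be at a given distance before it spills into peripheral vision:

    import math

    def max_panel_width(distance_m: float, central_fov_deg: float = 90.0) -> float:
        """Width of a flat panel at distance_m that just spans the central FOV."""
        half_angle = math.radians(central_fov_deg / 2.0)
        return 2.0 * distance_m * math.tan(half_angle)

    # A menu floating 2 metres away can be roughly 4 metres wide
    # before primary actions start landing in far peripheral vision.
    print(round(max_panel_width(2.0), 2))   # -> 4.0

Anything wider than that belongs to secondary or tertiary actions, per the guidance above.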

While all of this information may seem overwhelming, it is also very, very exciting. Designing UX and UI in 360 degrees is a phenomenal opportunity to learn, adapt and innovate in this amazing new digital age. At Shockoe.com, we are on the edge of our seats with excitement about being able to provide our clients with the intuitive experiences their users want through innovative technology that VR offers.

Want to stay connected on all things mobile?

 

Sign up for the Shockoe newsletter and we'll keep you updated with the latest blogs, podcasts, and events focused on emerging mobile trends.