Debugging Titanium Applications using Safari Web Inspector


Debugging is one of the most frustrating aspects of software development of any kind – it is also one of the most essential. Finding a malfunction can be time consuming, so it is important to have effective tools that decrease your debugging time. In Titanium, most of my debugging consisted of log statements and alerts. While that method can be useful, it is also time consuming to rebuild just to log a different variable, collection, or model.

One of my coworkers saw me debugging with logs and suggested an alternative: the Safari Web Inspector. I was very surprised at how easy it was to set up and how effective it can be throughout the process. One line is all you need to add to your tiapp.xml file in your project:
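(The snippet itself did not survive formatting here. To the best of my recollection it is Titanium's JavaScriptCore property; treat the exact tag name as an assumption and verify it against your SDK's documentation.)

```xml
<!-- Assumed property name: enables the JavaScriptCore framework so the
     simulator build exposes a JSContext that Safari can attach to -->
<use-jscore-framework>true</use-jscore-framework>
```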


under the <ios> element. Unfortunately, this method only works on the iOS simulator. Once you have updated your tiapp.xml, build your project and navigate to the page you would like to inspect. Next, open Safari; if the Develop menu isn't visible, you will need to follow a couple of extra steps:

Open Safari's Preferences from the Safari menu, select the Advanced tab, and check "Show Develop menu in menu bar." Once the Develop menu is visible, open it, select your simulator, and then choose JSContext.

This is where all the magic happens. The files where breakpoints can be inserted are visible in the left panel of the screen. Breakpoints are very convenient for stepping through your code and seeing exactly what is happening. I suggest opening the right panel when a breakpoint is hit; this is where you will find local variables and can add Watch Expressions. Watch Expressions let you pin the variables you would like to keep an eye on, so you can follow each one through every step of your code.

The console at the bottom is also a very helpful part of this debugger. I use it to inspect in detail what any model or collection contains, and it has been a lifesaver for me. It makes it easy to investigate exactly what is going on when a model or collection behaves unexpectedly.

The Safari Web Inspector has its problems and will, from time to time, crash the app – but overall this tool has helped me immensely in debugging my Titanium apps. It makes it effortless to nail down exactly where a problem lies. As much as we all want flawless code without bugs, they will appear every once in a while. However, this tool can save you from the frustration those bugs can cause. As I stated before, it is very easy to set up, so jump in and play around with it a bit. Have any questions or comments? Feel free to share your tricks for debugging. Also, you can find our latest apps and check out our work here.

Editor: In case you need to know other ways we used to debug Titanium Apps, please also check Appcelerator Titanium iOS Debugging with XCode or Rapid Titanium WebView debugging with Chrome Developer Tools


Improve Productivity with Atom and Oh My Zsh


Every new technology brings new challenges with it. These challenges can entail learning a whole new language, a completely different environment, or an entirely different way of thinking. New development platforms are emerging every day. With the introduction of virtual reality, augmented reality, and voice recognition into the mainstream product line, the ability to learn and utilize these technologies has become key to staying relevant and creating innovative applications. Any new technology can seem like a monster to tame, and it takes the right tools for the job to create an enjoyable workflow, leaving you time to focus on what is new and not on what you already know.

Most recently here at Shockoe, I have been working in web development, tackling a project built around Angular 2 and a RESTful API service using Hapi for Node.js. Having worked mainly in iOS development during my time here, transitioning to these platforms held its own set of challenges. Xcode, the integrated development environment (IDE) for iOS development, goes a long way toward assisting the developer: it helps correct syntax errors, tracks source control changes, assists with library/framework references, and warns of potential issues before compilation. With Angular 2 and Hapi, the tools used to assist in development are left entirely up to the developer.

After working for some time with Sublime, a closed source text editor, I was introduced to Atom, an open source alternative. Atom advertises being modern, approachable, and hackable to its core: a text editor for developers, by developers. Because it is open source, anyone can collaborate and assist in improving Atom for everyone, which allows it to be much more expansive and meet a wider range of needs. Atom has over 5,000 packages that can help with completion, linting, source control, formatting, and much more. Atom's support for packages and customization allows you to tailor the text editor to your specific needs. Its auto-completion provides built-in suggestions for syntax completion by looking through the open solution and buffers to match strings. Atom also has configuration settings to standardize the format of the code you write, keeping it organized and easy to read, and users can install different themes to alter the UI.

As I began development in Angular 2, I was faced with a problem: it's written in a new language, TypeScript. While the concepts were not new, the syntax for implementing them was. The atom-typescript package gave me everything I needed to ensure I was writing TypeScript properly, without having to rely on catching small issues at compile time. This greatly boosted my productivity and confidence when working with the new framework. The source control packages that have proven most useful to me are Git Blame and Merge Conflicts. Git Blame shows you the last person to edit a file, line by line. Merge Conflicts lets you detect potential merge conflicts before they happen, so they can be resolved before you put up that next pull request. Atom has great extensibility into Git source control, and what it lacks can be made up for in the terminal; more specifically, with a bash alternative, Z shell (Zsh).

Z shell (Zsh) is a Unix shell designed for interactive use. It contains many of the features of bash, but incorporates many improvements of its own that make it much more interactive and user friendly. Zsh's cd completion will list all subdirectories when you type "cd <tab>" or "cd d<tab>", in a well formatted manner that lets you traverse the directories with the <tab> key, quickly and easily choosing your destination without knowing the exact path from memory. These completions can then be chained to build full directory paths. Git completion helps you find the necessary git commands more quickly. For example, when changing branches in bash, you need to know the full, case sensitive name of the branch; with Zsh, you can type "git checkout <tab>" and be presented with a well formatted, easy to traverse list of branches for the given repository. I have only touched on a couple of Zsh's useful features, but there are many more ways it can make life in the terminal easier.

Zsh really shows its colors when combined with iTerm 2 and Oh My Zsh. iTerm 2 is a terminal emulator that extends the terminal with split panes for multiple sessions in a single window, paste history from your clipboard, and a configurable appearance. Oh My Zsh can be used in conjunction with iTerm 2, or the macOS Terminal, to manage Zsh configurations. The framework provides plugins and themes to further extend the features of your terminal, and the bundled themes let you add a touch of personality to a historically boring prompt.

As our team began working on this new web development project, we needed to be able to manage our tasks, record the time taken for each, and jump between the other projects we are working on. Oh My Zsh and the 'avit' theme have helped me manage my time and tasks. With the theme, I can see the current branch in any given directory; it is listed alongside the current directory path to show where my source is pointing. Along with the branch name, a checkmark or "X" symbol indicates a clean directory or outstanding changes that need to be committed. These two indicators alone save time otherwise spent listing branches and checking statuses, thus increasing productivity. A big feature of this theme not seen in many others is a timestamp from your last commit: alongside the branch and status information, it shows how long it has been since you last committed, in seconds, minutes, hours, or days. Many times during the development process a developer can lose track of time while focused on the task at hand, leaving them scratching their head and giving a rough estimate of the time a task took. With this handy tool, time estimates can be much more accurate.
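Switching themes is a one-line change in your ~/.zshrc; a minimal sketch, assuming a standard Oh My Zsh install:

```sh
# ~/.zshrc: pick the 'avit' theme bundled with Oh My Zsh
ZSH_THEME="avit"

# Load Oh My Zsh (the installer normally adds this line for you)
source $ZSH/oh-my-zsh.sh
```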

With these tools combined, productivity can be greatly increased to improve workflow. A decluttered workflow allows for a decluttered mind. Whether it's a new technology or an existing one, either of these tools can boost your productivity and confidence to take on a monster problem. Atom and Oh My Zsh have many more features, and I have only scratched the surface of what is possible. Check out the Atom website and the Oh My Zsh repository for more information on features and how to get started. I highly recommend adding these tools to your current setup to see how you can make them work for you.

Express HTTP servers with Node.js


This week we continue our series on building better web services with Node.js by taking a look at the Express web application framework. Unfamiliar with Node? Take a look at last week’s blog post to find out what you’re missing.

Why Express?

Express provides all of the tools you need to be immediately productive working on web applications in Node.js.  While Node provides a number of built-in networking APIs, they are, as a group, cumbersome and somewhat unintuitive to work with, forcing you into a large amount of repeated setup any time you want to add another endpoint, route, port, or HTTP verb.  This is where Express comes in.  We can use Express to handle all of the repeated setup tasks associated with building an HTTP server, and spend more time focused on getting the core logic and UI of our applications correct.

Getting set up

To start using Express, just do a normal NPM install:

yourname@domain > npm install express -g

Optionally (but highly encouraged!), you can also install the Express-generator package to provide a convenient terminal command to set up a well-structured Express project. Again, NPM is the bomb.

yourname@domain > npm install express-generator -g

We’ll probably have to install some more dependencies for the generated project later, but this is enough to start rolling.

Making your Express project

Now that Express and Express-generator have been set up, let’s generate an Express project. Express-generator comes with a number of command line options, but for the most part uses sensible defaults. One notable point of contention is the default template engine: Jade. Express-generator can also generate projects using EJS, Handlebars, or Hogan.js. While I personally prefer Handlebars for most purposes (it does a great job of separating logic and templates in a clean fashion), for the purposes of this article I’ll be using EJS in the interest of staying as close to JavaScript as possible. To generate an Express project using EJS templates, navigate to your project’s intended parent directory and run:

yourname@domain > express --ejs {{YOUR PROJECT NAME GOES HERE}}

This will place a simple Express project configured to template its web pages using EJS in the specified subdirectory. Next, you’ll need to navigate to that directory and run NPM install to resolve your dependencies.

yourname@domain > cd {{YOUR PROJECT NAME GOES HERE}}
yourname@domain > npm install

Note that you don’t provide any arguments to npm install this time. This means that npm will look inside the Node project’s configuration file, package.json, for a list of dependencies to install. This is the same way you would download dependencies if a friend or coworker gave you a link to a Node.js project to collaborate on. Specifying dependencies like this keeps you from having to commit your dependencies into the project itself, and gives other developers an easy way to resolve them.
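For reference, a freshly generated project's package.json looks something like this (exact package versions vary by generator release; these are illustrative):

```json
{
  "name": "my-express-app",
  "version": "0.0.0",
  "private": true,
  "scripts": {
    "start": "node ./bin/www"
  },
  "dependencies": {
    "body-parser": "~1.13.2",
    "cookie-parser": "~1.3.5",
    "debug": "~2.2.0",
    "ejs": "~2.3.3",
    "express": "~4.13.1",
    "morgan": "~1.6.1",
    "serve-favicon": "~2.3.0"
  }
}
```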


Cool… So what just happened?

Express generator sets up a very simple Express project, configured to perform common backend tasks like parsing requests, setting up routes and complex endpoints, and serving a public directory for client-side assets. Let’s take a look at the Express application’s entry point, app.js.

There’s a lot going on here, so let’s break it down by chunks. In the spirit of providing an overview of Node.js, let’s look at the require statements first.

Require is an extraordinarily powerful tool for looking up libraries and external dependencies within a Node.js project. When you pass a string to require, it will try to look up a properly packaged CommonJS or native Node.js module, keep a reference to the loaded module, and return it to the caller. Then, if you try to load the library again, the cached reference will be returned rather than attempting to load a fresh instance. This lets you efficiently manage dependencies and script loading with almost no effort.

Taking a closer look at the require block, you will notice it is split into two halves, based on the type of require that is happening. The first group loads pre-packaged Node modules installed using npm. Require checks your project’s node_modules folder for these, then walks up through the node_modules folders of parent directories. We supplied the option -g to npm install earlier, meaning Express was also installed at a system level, but in a generated project npm install will have placed express and the other modules in the local node_modules folder, and they will be loaded from there.

The second group of requires loads libraries local to the current project. This type of require uses normal relative file paths (. represents the current directory, .. is the parent), and will attempt to load javascript files relative to the current file’s parent directory. These statements load a pair of router files which will define a set of endpoints. We’ll take a closer look at these later.


Now let’s start serving

Now that we’re done resolving our dependencies, we can actually start initializing our Express project. Let’s take a look at the next couple of lines in app.js.

We do a couple of important set up tasks here. The obvious big one is creating our Express instance, but we also set up how we will be rendering our web pages and where to find them (in the views subdirectory), and begin to set up the actual logic of the Express application.

Express is built on the concept of middleware functions: small functions that each either process or respond to a request, or pass the request on to the next piece of middleware capable of handling it. You can use middleware to set up generic handling for all endpoints, specific handling for endpoints matching some route, or specialized handling for an individual endpoint. This is done primarily via the use function.

use has a couple of nuances that allow you to set up complex server-wide handling easily. Looking at the use calls, you can see that some are passed a single argument, while others receive two. The single-argument calls apply the selected piece of middleware to every request that reaches the server, while the two-argument version restricts the middleware to a single route. We set up our pre-generated routes at the end of this block.

Next, we set up a couple of custom middleware functions.

Again, there’s a lot going on in this series of calls, so let’s break down what’s happening. Middleware is ultimately some function that will be executed, and these functions always receive the same three arguments, in the same order. The first argument is the request object, which is where you look for information about an individual request coming into your server. The request can be preprocessed by middleware, and since every piece of middleware receives the same object reference for a given request, all subsequent middleware will see that information. That’s how the bodyParser library from earlier works: it allows JSON payloads to be automatically parsed into a request body object, rather than left as plain text.

The second argument is the response object. The response object is used to do things like set HTTP status codes or response headers, report request progress back to the client, and actually send your response to the client when you are done handling the request.

The third argument is next, a reference to the next function in the middleware stack. You call next to indicate that you are done attempting to handle this request and that the next piece of middleware should make its attempt. Middleware runs in order of registration, so the first middleware you app.use that matches the request’s path executes first, proceeding until either a response is sent or we run out of middleware to try.
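The mechanics are easy to model outside of Express. The toy runner below is not Express's implementation, but it walks a middleware stack the same way, handing each function (req, res, next) and stopping once something responds instead of calling next:

```javascript
// A miniature model of Express's middleware chain: each entry receives
// (req, res, next) and either responds or defers to the next handler.
function runMiddleware(stack, req, res) {
  function dispatch(i) {
    if (i >= stack.length) return; // fell off the end: nothing handled it
    stack[i](req, res, () => dispatch(i + 1));
  }
  dispatch(0);
}

const stack = [
  // Preprocessing middleware: annotates the shared request object,
  // the way bodyParser does, then passes the request along.
  (req, res, next) => { req.parsed = true; next(); },

  // Route-specific middleware: only answers requests for /users.
  (req, res, next) => {
    if (req.url === '/users') res.send('user list');
    else next();
  },

  // Fallback: nothing matched, send a 404-style response.
  (req, res, next) => res.send('not found'),
];
```

Swapping the order of the entries changes the outcome, which is why registration order matters in app.js.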

There is a special case of middleware, namely error handlers. Error handlers must accept exactly four arguments (i.e. the function’s arity must be four). The last three arguments are the same as for a normal piece of middleware, but an error object must be accepted as an additional first argument.

Taking a look at the middleware added at the end of app.js, we have a single piece of standard middleware that handles any routes with no middleware set up to respond, returning a 404 “not found” message instead. The remaining middleware defines the development and production error handlers. As you can see, these are four-argument functions and are therefore error-handling middleware. Since the first piece of error middleware sends a response, in the dev environment we return error messages with full stack traces, whereas in production we merely notify the user that an error occurred.
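A framework can tell the two kinds of middleware apart by the function's declared parameter count (Function.prototype.length); a small sketch of that check:

```javascript
// Normal middleware: three declared parameters.
const notFoundHandler = (req, res, next) => res.send('404: not found');

// Error middleware: four declared parameters, error first.
const errorHandler = (err, req, res, next) => res.send('error: ' + err.message);

// fn.length reports the declared parameter count, which is how an
// error handler can be distinguished from ordinary middleware.
function isErrorHandler(fn) {
  return fn.length === 4;
}

console.log(isErrorHandler(notFoundHandler)); // false
console.log(isErrorHandler(errorHandler));    // true
```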

Finally, on the last line of app.js, we export our Express application using the normal CommonJS module.exports syntax.


So, about those route things…

Routes are a powerful tool that Express gives you to organize your code in a sensible fashion. Express wouldn’t make for a very good framework if we had to put everything in app.js, now would it? Let’s take a look at one of the generated route files that we required earlier.

As you can see, the way a router works is very similar to how your base Express application works. You initialize an instance of your router, register some number of pieces of middleware, and export it for your consumer (almost always your Express application). One notable difference is that instead of using use to register middleware, we are using a function named get. get is a convenience function for registering middleware that only executes for requests with the HTTP verb GET; there are similar functions for PUT, POST, DELETE, and the other HTTP verbs as well. The second nuance of routers is that they inherit the base route where they are mounted by the application. For the users route, for instance, all endpoints will start with /users, and then have any additional route tokens appended to the end. Additionally, you can nest routes as deeply as you need to achieve your desired code organization.
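Prefix inheritance is easy to picture with a toy model (not Express's real implementation): a router collects verb-plus-path handlers, and mounting joins the mount point onto each path.

```javascript
// Minimal router model: get() registers handlers, mount() prefixes them.
function makeRouter() {
  const handlers = {};
  return {
    get(path, fn) { handlers['GET ' + path] = fn; },
    mount(prefix, app) {
      for (const [key, fn] of Object.entries(handlers)) {
        const [verb, path] = key.split(' ');
        // get('/') on a router mounted at '/users' answers '/users',
        // not '/users/'.
        app[verb + ' ' + prefix + (path === '/' ? '' : path)] = fn;
      }
    },
  };
}

const usersRouter = makeRouter();
usersRouter.get('/', () => 'respond with a resource');
usersRouter.get('/profile', () => 'user profile');

const app = {}; // stand-in route table for the application
usersRouter.mount('/users', app);
```

After mounting, a request for GET /users/profile would find the handler registered as get('/profile') on the router.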


Cool, but how do I start it?

NPM comes to the rescue again. Express-generator set up the project’s package.json with a pre-defined start command, which will run your application on a port defined by your shell’s environment variables, or the default port of 3000. Take a look at bin/www to see how this works, or just run npm start to get your application running! Note that npm start will take control of your current console and use it for logging; you may want to direct your output to a file by running something of the form npm start >> log.txt
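The port selection in bin/www boils down to a few lines; a simplified sketch (the generated file also handles named pipes and listen errors):

```javascript
// Prefer the PORT environment variable, falling back to 3000, which is
// why `PORT=8080 npm start` changes the listening port.
function resolvePort(env) {
  const parsed = parseInt(env.PORT, 10);
  return Number.isNaN(parsed) ? 3000 : parsed;
}

console.log(resolvePort({}));               // 3000
console.log(resolvePort({ PORT: '8080' })); // 8080
```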


So where do I go from here?

So far we’ve covered setting up your node instance and configuring it to serve a web service using Express, but there’s still a lot of learning to be done! In the coming weeks, we’ll go over how to take this basic Node.js HTTP server and configure it to serve templated web pages using EJS, we’ll go over how to configure a simple noSQL database for Node.js using MongoDB, and we’ll pull it all together to write a simple push notification server, controlled through a RESTful API.

Shockoe and Shaka Smart Basketball Camps Team Up


Any young organization that experiences explosive growth invariably runs into many of the same problems. Building a scalable infrastructure while offering the best customer service are two challenges that have faced Shaka Smart Basketball Camps, LLC as it strives to provide the best basketball experience for its campers and parents each summer in Richmond, Virginia, since 2009.

Smart is the head coach of the surging Virginia Commonwealth basketball program, the college basketball Cinderella story of 2011, and the success of his program has driven greater attendance at his summer camps in the Central Virginia region. The added campers brought scalability concerns for the staff in 2011 and 2012, and with more campers in store for 2013, the team had to huddle to come up with a winning result.

The mission of the basketball camp is to “encourage self-esteem and a love of fitness and nutrition” through a fun and informative week-long camp. As part of that mission, the camp’s curriculum teaches its attendees lessons about practical financial responsibility through a camp “bank account” that the children can use to purchase items from one of a number of the camp’s stores.

For Shaka’s staff, registering each camper, tracking who was in attendance each day, and the unique challenge of monitoring each camper’s “bank account” balance across a five-day basketball camp presented problems. Keeping up with the swelling number of campers over the past few years with a paper process proved difficult and time consuming.

After last year’s camp, the staff needed a better way to supply right-time account balances, perform inventory control, provide updated balances to the staffers at the camp store and then offer reports for parents to reinforce the camp’s lessons of fiscal responsibility. To accomplish this, Shaka Smart Basketball Camps teamed with us at Shockoe Mobile Application Development to come up with a solution for the staff. To meet the team’s needs, we created a back-end database built to automate inventory control, track camper attendance, and track camper account balances all while providing an intuitive front-end display on iPads for camp staff to quickly learn and efficiently access the system. The devices were then integrated with Square payment software to make payments as simple and accessible as possible.

Last week was the camp’s first of three weeks and was the first real test of how smooth and scalable the application could be for the staff. So how did it work?

With the new application, registration went more quickly, attendance was efficiently tracked and there were no discrepancies in balances at the various camp stores based around the location of the camp, Virginia Commonwealth University’s Siegel Center. Reports were then provided back to parents on how their children spent their allowances during camp for additional reinforcement of sound financial management.

With the new application in hand, Smart’s staff of administrators and educators were able to focus on what matters most: teaching children the fundamentals of basketball while establishing a healthy lifestyle, resulting in a win-win for everyone involved.


Founded in April of 2009, the mission of Shaka Smart Basketball Camps LLC is to encourage high self-esteem and a lifelong love of fitness and nutrition through fun, high-quality basketball instruction and games. The camp provides a wealth of experiences for its attendees including both current and former Virginia Commonwealth University basketball players, youth coaches from around the nation, conditioning experts, medical staff and VCU graduate assistants from the Center for Sports Leadership. This year, the camp will run for three week long sessions providing instruction and learning to hundreds of campers. Learn more about the camp at