How to Document an Existing API Using Swagger

A few months ago, I jumped into an exciting project with my teammates at Shockoe. My expertise was needed for testing and debugging, which led me to begin working with the middleware API. This project presented new challenges: the APIs had already been set up by a client, and the project was ongoing, which meant it was still changing and being updated regularly.

Adding features to the project required new resources and endpoints, and updating how error messages were handled required updating API responses. After a certain point, enough changes had been made that the documentation had to be redone.

I came across two popular resources for documenting APIs: Swagger and RAML.

RAML takes a top-down approach to defining APIs: you first define what your methods will be, and then how they will work. Because you define the system first and fill in the details later, RAML works well while developing an API, since it matches the mindset of a developer building out a system.

Swagger, on the other hand, uses a bottom-up approach, defining each method one at a time. In my case, I decided to use Swagger because this methodology works better when documenting an already existing API: it lets you work through each API call and document as you go.

Swagger uses JSON objects to describe API requests and responses. For example, the documentation for a GET request against a pets endpoint could look like the example below, with a separate items object defined later in the document to increase readability.
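Here is a minimal sketch of what that could look like in Swagger 2.0 JSON; the endpoint and fields are illustrative, not taken from the actual project:

{
  "paths": {
    "/pets": {
      "get": {
        "description": "Returns all pets",
        "produces": ["application/json"],
        "responses": {
          "200": {
            "description": "A list of pets",
            "schema": {
              "type": "array",
              "items": { "$ref": "#/definitions/Pet" }
            }
          }
        }
      }
    }
  },
  "definitions": {
    "Pet": {
      "type": "object",
      "properties": {
        "id": { "type": "integer" },
        "name": { "type": "string" }
      }
    }
  }
}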

Since I had already been using Postman to test each call, I had a quick way to go through each call, document the request and response, and move on to the next step. Because the API only returned JSON, the similarity between the formatting of the response and the Swagger response documentation led me to create a simple conversion program. This allowed me to easily input the response from an API call and output properly formatted Swagger documentation for it.
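The idea behind the converter can be sketched in a few lines of JavaScript (a reconstruction for illustration, not the original program):

// Walk a sample JSON response and emit a Swagger-style schema describing it.
function toSwaggerSchema(value) {
  if (value === null) {
    return {}; // type unknown from a null sample; fill in by hand
  }
  if (Array.isArray(value)) {
    return { type: 'array', items: value.length ? toSwaggerSchema(value[0]) : {} };
  }
  if (typeof value === 'object') {
    var properties = {};
    Object.keys(value).forEach(function (key) {
      properties[key] = toSwaggerSchema(value[key]);
    });
    return { type: 'object', properties: properties };
  }
  if (typeof value === 'number') {
    return { type: Number.isInteger(value) ? 'integer' : 'number' };
  }
  return { type: typeof value }; // 'string' or 'boolean'
}

// Paste in a sample response, get schema JSON back out:
var sample = { id: 1, name: 'Rex', tags: ['friendly'] };
console.log(JSON.stringify(toSwaggerSchema(sample), null, 2));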

At this point the process became simple: look up the API request, document the new path in Swagger (which can be made easier by copying an existing path and changing the relevant fields), make a request to get a sample response, put the response in the converter, and copy the result over to the Swagger file. From here, I was able to quickly finish documenting the API and set up a process to easily keep it updated or document another finished API.

Documenting a project can seem daunting when you need to do it all at once, but it is still a very important part of the process. In the end, I found Swagger to be a great API documentation tool that was easy to use for my project's needs. I was able to quickly pick it up and get started. Adding each method was simple after the first one, and the result gave an overview of all the available methods while showing example requests and responses along the way. I hope this helps you when you find yourself documenting existing APIs. Leave us a comment if you have any questions or insight on how to use Swagger.


Could Hyperloop be the best Appcelerator feature yet?

I recently took the time to check out Appcelerator's Labs page, where they let users try out pre-release software. There are some interesting projects there, but I spent most of my time experimenting with Hyperloop, which could be the best new feature in Titanium.

The Hyperloop module will allow developers to interact with native APIs directly from their JavaScript code! Titanium already covers the majority of native APIs, but some more complicated projects need APIs that are not covered. Hyperloop will make interacting with the APIs not directly covered by Titanium much easier than it has been in the past.

Hyperloop will also make it easier to use third-party Android libraries or iOS CocoaPods. These can be added to a Titanium project, and Hyperloop will make the library available inside the JavaScript code without having to write a native module.

There is a lot of work that goes into developing and maintaining a native module because there are two different code bases. Debugging native modules can also be time-consuming when going back and forth between the native module and the Titanium project. Since Hyperloop will put the native API interaction alongside the rest of the Titanium JavaScript code, maintaining the project should be much easier.

I think Hyperloop will be one of the best additions to Appcelerator’s arsenal, but there are some improvements I would like to see before its final release.

In a typical project using Hyperloop, I might write something like this if I needed to require some native Android APIs:
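(A sketch: Hyperloop resolves Android classes by their fully qualified names, and the specific classes here are just examples.)

var Activity = require('android.app.Activity');
var Intent = require('android.content.Intent');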

A more complicated example that uses a lot of native APIs could look like this:
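(Again a sketch; the point is the pile-up of one-class-per-line requires.)

var Activity = require('android.app.Activity');
var Intent = require('android.content.Intent');
var View = require('android.view.View');
var Button = require('android.widget.Button');
var TextView = require('android.widget.TextView');
var LinearLayout = require('android.widget.LinearLayout');
var Toast = require('android.widget.Toast');
var Color = require('android.graphics.Color');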

This looks a little messy. I would like to see ES6-style destructuring and object matching. That could make the code above look something like this:
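(This syntax is hypothetical, a wish-list item rather than something Hyperloop supports; requiring a package and destructuring its classes might look like this:)

const { Activity } = require('android.app');
const { Intent } = require('android.content');
const { View } = require('android.view');
const { Button, TextView, LinearLayout, Toast } = require('android.widget');
const { Color } = require('android.graphics');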

This could make the code much more readable, as classes from the same package would be grouped together, and variables with matching names would be created automatically.

Class inheritance is another ES6 feature that would be a good addition to Hyperloop. Inheritance is a big part of the Objective-C and Java programming languages. It allows the developer to override a class's functions while keeping the original definitions available through calls to super(). I think Hyperloop can work without class inheritance, but being able to extend the native classes from within the JavaScript code would be a huge advantage.
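Something like this hypothetical sketch (again, not currently supported) is what I have in mind:

// Hypothetical: extend a native Android class directly from JavaScript.
const Activity = require('android.app.Activity');

class MyActivity extends Activity {
  onCreate(savedInstanceState) {
    // custom behavior first...
    console.log('MyActivity starting');
    // ...then fall back to the original native implementation
    super.onCreate(savedInstanceState);
  }
}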

I, personally, cannot wait to start using Hyperloop in my daily development here at Shockoe. I think it will not only let me make more powerful applications that harness more native APIs, but it will also save me time when using third-party libraries. Fewer native modules means less code to maintain down the line when operating systems and SDKs are updated.

I think Appcelerator has a great product in development, and with a few improvements, it will be invaluable to the Appcelerator developer community.

Mock Data and You

A mobile app developer’s dream is to walk into the office on day one of the first sprint of a new project and find the bounty of a fully functioning web API laid before him. He locks eyes with the developer next to him and sheds a single tear, not out of sadness or even joy, but with the grim satisfaction that both recognize as meaning “We’ve earned this.” Before them stands a pillar of perfection, an endpoint standard that will never change throughout the lifetime of the project, a bastion of coding best practices that meets their exact needs. Sadly, this is usually not the case: API teams get delayed, endpoints get altered to an unrecognizable state, and servers go down for annoying, albeit necessary, maintenance. Thankfully there are several ways to simulate incoming data in the meantime.

Here at Shockoe we handle incoming application data in three stages: mock fixture data, mock API data, and client API data.

Mock fixture data is a simple tool: static properties that allow you to quickly lay out screens with plausible values. Pages load instantly because you're not making any API calls, which is useful in the early stages, when you're constantly going back and forth between the code and the app to change a border width or a shade of blue.
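A fixture can be as simple as a static object checked into the project (the shape and values here are illustrative):

// fixtures/user.js: static data for laying out screens before the API exists
module.exports = {
  firstName: 'Jane',
  lastName: 'Doe',
  address: '123 Main St, Richmond, VA',
  phone: '804-555-0123'
};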

Once the screens are laid out, however, it’s useful to make actual API calls that rely on Wi-Fi and mimic the latency of calls to the client’s servers. This can reveal cases in which the app should utilize blocking UI to make sure the user can’t click on anything else while the API call is going out, or instances where there should be an activity indicator if the wait is going to be long.

Here at Shockoe we like to use Mockaroo, a fantastic resource for mock data that can be accessed through their API. The steps are simple: create a schema for your data and populate it with the types of properties you want returned (e.g. Person, with firstName, lastName, address, phone). Then make a call to Mockaroo's API, specifying the schema and return type in the parameters. The available field types range from city names and email addresses to formulas and regular expressions. The most important aspect of this data is the randomness. Programmers filling in fixture data might have never dreamed that they would see their city property populated with Parangaricutirimicuaro, but when Mockaroo sends it, it exposes the fact that the field is too short or overlaps with other labels.
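A call can then be as simple as one URL. The endpoint and parameter names below are assumptions from memory, so check Mockaroo's API docs before relying on them:

var https = require('https');

// Assumed endpoint and parameters; verify against Mockaroo's documentation.
var url = 'https://api.mockaroo.com/api/generate.json' +
          '?key=YOUR_API_KEY&schema=Person&count=25';

https.get(url, function (res) {
  var body = '';
  res.on('data', function (chunk) { body += chunk; });
  res.on('end', function () {
    var people = JSON.parse(body);
    console.log(people.length + ' mock people, first city: ' + people[0].city);
  });
});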

Another useful Mockaroo feature is missing data. You can specify what percentage of the time you want a certain field to return a blank value, so you can be sure that your app handles this from both a UI and functionality perspective: making sure the interface still looks good when certain properties are missing, as well as not crashing.

The end goal is to make the process of integration to a client’s API as smooth as possible. Indeed, a project manager’s pillar of perfection would mean this process is a simple URL swap. And while this is almost never the case, the Shockoe process leaves us more than adequately prepared to face that crumbling pillar and get through with integration unscathed, no matter what comes our way.

Express HTTP servers with Node.js

This week we continue our series on building better web services with Node.js by taking a look at the Express web application framework. Unfamiliar with Node? Take a look at last week’s blog post to find out what you’re missing.

Why Express?

Express provides all of the tools you need to immediately be productive working on web applications in Node.js. While Node provides a number of built-in networking APIs, they are as a group cumbersome and somewhat unintuitive to work with, forcing you to repeat a large amount of setup any time you want to add another endpoint, route, port, or HTTP verb. This is where Express comes in. We can use Express to handle all of the repeated setup tasks associated with building an HTTP server, and spend more time focusing on getting the core logic and UI of our applications correct.
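To make this concrete, here is a minimal sketch of the same endpoint written twice, once against the bare http module and once with Express (the route and port are illustrative; run one server or the other, since both bind port 3000):

var http = require('http');

// Bare Node: every route means hand-rolling method and URL checks.
http.createServer(function (req, res) {
  if (req.method === 'GET' && req.url === '/hello') {
    res.writeHead(200, { 'Content-Type': 'text/plain' });
    res.end('Hello world');
  } else {
    res.writeHead(404);
    res.end();
  }
}).listen(3000);

var express = require('express');
var app = express();

// Express: the routing boilerplate disappears.
app.get('/hello', function (req, res) {
  res.send('Hello world');
});
app.listen(3000);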

Getting set up

To start using Express, just do a normal NPM install:

yourname@domain > npm install express -g

Optionally (but highly encouraged!), you can also install the Express-generator package to provide a convenient terminal command to set up a well-structured Express project. Again, NPM is the bomb.

yourname@domain > npm install express-generator -g

We’ll probably have to install some more dependencies for the generated project later, but this is enough to start rolling.

Making your Express project

Now that Express and Express-generator have been set up, let's generate an Express project. Express-generator comes with a number of command line options, but for the most part uses sensible defaults. One notable point of contention is the default template engine: Jade. Express-generator can also generate projects using EJS, Handlebars, or Hogan.js. While I personally prefer Handlebars for most purposes (it does a great job of separating logic and templates in a clean fashion), for the purposes of this article I'll be using EJS in the interest of staying as close to JavaScript as possible. To generate an Express project using EJS templates, navigate to your project's intended parent directory and run:

yourname@domain > express --ejs {{YOUR PROJECT NAME GOES HERE}}

This will place a simple Express project configured to template its web pages using EJS in the specified subdirectory. Next, you’ll need to navigate to that directory and run NPM install to resolve your dependencies.

yourname@domain > cd {{YOUR PROJECT NAME GOES HERE}}
yourname@domain > npm install

Note that you don’t provide any arguments to npm install this time. This means that npm will look inside the Node project’s configuration file, package.json, for a list of dependencies to install. This is the same way that you would download dependencies if one of your friends or coworkers gave you a link to a Node.js project to collaborate on. Specifying dependencies like this allows you not to include your dependencies on the project itself, and gives other developers an easy way to resolve these dependencies.


Cool… So what just happened?

Express generator sets up a very simple Express project, configured to perform common backend tasks like parsing requests, setting up routes and complex endpoints, and serving a public directory for client-side assets. Let’s take a look at the Express application’s entry point, app.js.
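Here is a reconstruction of the generated app.js for an EJS project (your copy may differ slightly depending on the generator version):

var express = require('express');
var path = require('path');
var favicon = require('serve-favicon');
var logger = require('morgan');
var cookieParser = require('cookie-parser');
var bodyParser = require('body-parser');

var routes = require('./routes/index');
var users = require('./routes/users');

var app = express();

// view engine setup
app.set('views', path.join(__dirname, 'views'));
app.set('view engine', 'ejs');

// uncomment after placing your favicon in /public
//app.use(favicon(path.join(__dirname, 'public', 'favicon.ico')));
app.use(logger('dev'));
app.use(bodyParser.json());
app.use(bodyParser.urlencoded({ extended: false }));
app.use(cookieParser());
app.use(express.static(path.join(__dirname, 'public')));

app.use('/', routes);
app.use('/users', users);

// catch 404 and forward to error handler
app.use(function(req, res, next) {
  var err = new Error('Not Found');
  err.status = 404;
  next(err);
});

// development error handler: will print stacktrace
if (app.get('env') === 'development') {
  app.use(function(err, req, res, next) {
    res.status(err.status || 500);
    res.render('error', { message: err.message, error: err });
  });
}

// production error handler: no stacktraces leaked to user
app.use(function(err, req, res, next) {
  res.status(err.status || 500);
  res.render('error', { message: err.message, error: {} });
});

module.exports = app;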

There’s a lot going on here, so let’s break it down by chunks. In the spirit of providing an overview of Node.js, let’s look at the require statements first.

Require is an extraordinarily powerful tool for looking up libraries and external dependencies within a Node.js project. When you pass a string to require, it will try to look up a properly packaged CommonJS or native Node.js module, keep a reference to the loaded module, and return it to the caller. Then, if you try to load the library again, the cached reference will be returned rather than attempting to load a fresh instance. This lets you efficiently manage dependencies and script loading with almost no effort.

Taking a closer look at the require block, you will notice it is split into two halves, based on the type of require that is happening. The first group loads pre-packaged Node modules installed using npm. Require resolves these by checking your project's node_modules folder, then walking up through parent directories' node_modules folders. Note that installing a package globally (as we did with the -g option earlier) mainly exposes its command-line tools at a system level; the copy of Express that your code actually requires was downloaded into the project's node_modules folder, along with the other dependencies, when we ran npm install, and will be loaded from there.

The second group of requires loads libraries local to the current project. This type of require uses normal relative file paths (. represents the current directory, .. is the parent), and will attempt to load JavaScript files relative to the directory containing the current file. These statements load a pair of router files which will define a set of endpoints. We'll take a closer look at these later.


Now let’s start serving

Now that we’re done resolving our dependencies, we can actually start initializing our Express project. Let’s take a look at the next couple of lines in app.js.

We do a couple of important setup tasks here. The obvious big one is creating our Express instance, but we also configure how we will render our web pages and where to find them (in the views subdirectory), and begin to set up the actual logic of the Express application.

Express is built on the concept of middleware: small functions, each capable of processing or responding to a request, or passing the request along to the next piece of middleware capable of handling it. You can use middleware to set up generic handling for all endpoints, specific handling for endpoints matching some route, or specialized handling for an individual endpoint. This is done primarily via the use function.

use has a couple of nuances that allow you to set up complex server-wide handling easily. Looking at the use calls, you can see that some are passed a single argument, while others receive two. The single-argument calls apply the given piece of middleware to every request that reaches the server, while the two-argument version restricts the middleware to requests matching a particular route. We set up our pre-generated routes at the end of this block.

Next, we set up a couple of custom middleware functions.
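(Again from the reconstructed app.js:)

// catch 404 and forward to error handler
app.use(function(req, res, next) {
  var err = new Error('Not Found');
  err.status = 404;
  next(err);
});

// development error handler: will print stacktrace
if (app.get('env') === 'development') {
  app.use(function(err, req, res, next) {
    res.status(err.status || 500);
    res.render('error', { message: err.message, error: err });
  });
}

// production error handler: no stacktraces leaked to user
app.use(function(err, req, res, next) {
  res.status(err.status || 500);
  res.render('error', { message: err.message, error: {} });
});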

Again, there’s a lot going on in this series of calls, so let’s break down what’s happening. Middleware is ultimately some function that will be executed, and these functions always receive the same three arguments, in the same order. The first argument is the request object. The request object is where you will look to get information about an individual request coming to your server. The request can be preproccessed by middleware, and since all pieces of middleware receive the same object reference for a given request, all other pieces of middleware will receive this information. That’s how the bodyParser library from earlier works. It allows JSON payloads to automatically be parsed and turned into a request body object, rather than plain text.

The second argument is the response object. The response object is used to do things like set HTTP status codes or response headers, report request progress back to the client, and actually send your response to the client when you are done handling the request.

The third argument is the next function, a reference to the next function in the middleware stack. You can call next to indicate that you are done attempting to handle this request, and that the next piece of middleware should make its attempt. Middleware runs in registration order, so the first middleware you app.use that matches the request's path executes first, proceeding until either a response is sent or we run out of middleware to try.

There is a special case of middleware: error handlers. Error handlers must accept exactly four arguments (i.e. the function's arity must be four). The last three arguments are the same as for a normal piece of middleware, but an error object must be accepted as an additional first argument.

Taking a look at the middleware added at the end of app.js, we have a single piece of standard middleware that handles any routes that do not have middleware set up to respond, returning a 404 Not Found message instead. The remaining middleware defines the development and production error handlers. As you can see, these are four-argument functions, and as such are error-handling middleware. The development handler is only registered in the dev environment, and because it sends a response with a full stack trace, the production handler is never reached there; in production, we merely notify the user that an error occurred.

Finally, on the last line of app.js, we export our Express application using the normal CommonJS module.exports syntax.


So, about those route things…

Routes are a powerful tool that Express gives you to organize your code in a sensible fashion. Express wouldn’t make for a very good framework if we had to put everything in app.js, now would it? Let’s take a look at one of the generated route files that we required earlier.
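Here is the generated routes/users.js (again a reconstruction, but very close to what the generator emits):

var express = require('express');
var router = express.Router();

/* GET users listing. */
router.get('/', function(req, res, next) {
  res.send('respond with a resource');
});

module.exports = router;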

As you can see, the way a router works is very similar to how your base Express application works. You initialize an instance of your router, register some number of pieces of middleware, and export it for your consumer (almost always your Express application). One notable difference is that instead of using use to register middleware, we are using a function named get. get is a convenience function for registering middleware that only executes for requests with the HTTP verb GET; there are similar functions for PUT, POST, DELETE, and the other HTTP verbs as well. The second nuance of routers is that they inherit the base route of where they are used by the application. For the users route, for instance, all endpoints will start with /users, and then have any additional route tokens appended to the end. Additionally, you can nest routes as deeply as you need to achieve your desired code organization.


Cool, but how do I start it?

NPM comes to the rescue again. Express-generator set up the project's package.json with a pre-defined start command, which will run your application on a port defined by your shell's environment variables, or the default port of 3000. Take a look at bin/www to see how this works, or just run npm start to get your application running! Note that npm start will take control of your current console and use it for logging; you may want to direct your output to a file by running something of the form npm start >> log.txt.


So where do I go from here?

So far we’ve covered setting up your node instance and configuring it to serve a web service using Express, but there’s still a lot of learning to be done! In the coming weeks, we’ll go over how to take this basic Node.js HTTP server and configure it to serve templated web pages using EJS, we’ll go over how to configure a simple noSQL database for Node.js using MongoDB, and we’ll pull it all together to write a simple push notification server, controlled through a RESTful API.

API Description Language for the Enterprise

REST, now estimated to be used by 90% of new APIs, is great for groups within a corporation that can engineer their APIs as quickly as lean startups can. However, REST alone doesn't fully meet the needs of enterprise businesses. With their slow-moving review and approval processes, internal politics, and sometimes large user bases, enterprise businesses require APIs that are simple to manage, quick to design, and standardized across potentially hundreds of other APIs spanning multiple lines of business.

It’s been more than a couple years now since we started to look for API Description Languages that can fit small to large companies. Anything that can help the communication between Back-End and Front-End developers to be fast, easy and standardized, it’s been something that we always strive for. Swagger and RAML, both are API Description Languages that meets those needs.

It has been suggested that RAML may be the perfect solution for designing the next generation of enterprise-level APIs, but as you might know, many enterprises are still using SOAP-based APIs and, of course, WSDLs are still king.

With Swagger or RAML, at the end of the day, you are describing an API in a structured text document (JSON for Swagger, YAML for RAML), and there are many tools out there that will help you generate these documents via nice editors (GUIs).