
Visualizing Docker Containers and Images


This article was originally published on Daniel Eklund’s personal blog, and with his kind permission, we’re sharing it here for Codeship readers.

This post is meant as a Docker 102-level post. If you are unaware of what Docker is, or don’t know how it compares to virtual machines or to configuration management tools, then this post might be a bit too advanced at this time.

This post hopes to aid those struggling to internalize the Docker command line, specifically with knowing the exact difference between a container and an image. More specifically, this post shall differentiate a simple container from a running container.

Layers of the union file system

I do this by taking a look at some of the underlying details, namely the layers of the union file system. This was a process I undertook for myself in the past few weeks, as I am relatively new to Docker and have found its command line difficult to internalize.




In my opinion, understanding how a technology works under the hood is the best way to learn quickly and to build confidence that you are using the tool in the correct way. Often a technology is released with a certain breathlessness and hype that make it difficult to understand appropriate usage patterns. More specifically, new technologies often develop abstraction models that invent new terminologies and metaphors, which might be useful at first but make it harder to develop mastery in later stages.

A good example of this is Git. I could not gain traction with Git until I understood its underlying model, including trees, blobs, commits, tags, tree-ish, etc. I had written about this in a previous post, and I remain convinced that people who don’t understand the internals of Git cannot have true mastery of the tool.

Image Definition

The first visual I present is that of an image, shown below in two different ways. An image is defined as the “union view” of a stack of read-only layers.

Breakdown of a Docker image

On the left we see a stack of read-only layers. These layers are internal implementation details only, and are accessible outside of running containers in the host’s file system. Importantly, they are read-only (or immutable), but they capture the changes (deltas) made to the layers below. Each layer may have one parent, which itself may have a parent, and so on. The top-level layer may be read by a union-ing file system (AUFS on my Docker installation) to present a single cohesive view of all the changes as one read-only file system. We see this “union view” on the right.

If you want to see these layers in all their glory, you might find them in different locations on your host’s file system. These layers will not be viewable from within a running container directly. On my Docker host, I can see them at /var/lib/docker in a subdirectory called aufs.

# sudo tree -L 1 /var/lib/docker/
/var/lib/docker/
├── aufs
├── containers
├── graph
├── init
├── linkgraph.db
├── repositories-aufs
├── tmp
├── trust
└── volumes

7 directories, 2 files

Container Definition

A container is defined as a “union view” of a stack of layers, the top of which is a read-write layer.

Breakdown of a Docker container

I show this visual above, and you will note it is nearly the same as an image, except that the top layer is read-write. At this point, some of you might notice that this definition says nothing about whether the container is running, and this is on purpose. It was this discovery in particular that cleared up a lot of my confusion.

Takeaway: A container is defined only as a read-write layer atop an image (of read-only layers itself). It does not have to be running.

So if we want to discuss running containers, we need to define what a running container is.

Running container definition

A running container is defined as a read-write “union view” plus the isolated process-space and the processes within it. The visual below shows the read-write container surrounded by this process-space.

Read-write container

It is this act of isolation atop the file system, effected by kernel-level technologies like cgroups and namespaces, that has made Docker such a promising technology. The processes within this process-space may change, delete, or create files within the “union view”, and those changes will be captured in the read-write layer. I show this in the visual below:

Read-write container with new file

To see this at work, run the following command: docker run ubuntu touch happiness.txt. You will then be able to see the new file in the read-write layer on the host system, even though there is no longer a running container (note: run this on your host system, not in a container):

# find / -name happiness.txt
/var/lib/docker/aufs/diff/860a7b...889/happiness.txt

Image Layer Definition

Finally, to tie up some loose ends, we should define an image layer. The below image shows an image layer and makes us realize that a layer is not just the changes to the file system.

Image layer

The metadata is additional information about the layer that allows Docker to capture runtime and build-time information, as well as hierarchical information about a layer’s parent. Both read-only and read-write layers contain this metadata.

Read-only and read-write layers have metadata

Additionally, as we have mentioned before, each layer contains a pointer to its parent layer, using the parent’s id (in these visuals, the parent layers are below). If a layer does not point to a parent layer, then it is at the bottom of the stack.

Parent layers

Metadata location

At this time (and I’m fully aware that the Docker developers could change the implementation), the metadata for an image (read-only) layer can be found in a file called json within /var/lib/docker/graph, under the id of the particular layer: /var/lib/docker/graph/e809f156dc985.../json, where e809f156dc985... is the elided id of the layer.

The metadata for a container seems to be broken into many files, but more or less is found in /var/lib/docker/containers/<id> where <id> is the id of the read-write layer. The files in this directory contain more of the run-time metadata needed to expose a container to the outside world: networking, naming, logs, etc.
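
If you want to poke at this metadata yourself, the following is a sketch of how (paths reflect the AUFS-based layout described above and vary by Docker version; <layer-id> and <container-id> are placeholders):

$ sudo cat /var/lib/docker/graph/<layer-id>/json        # image layer metadata (JSON)
$ sudo ls /var/lib/docker/containers/<container-id>/    # container run-time metadata files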


Tying It All Together

Now, let’s look at the commands in the light of these visual metaphors and implementation details.

docker create <image-id>

Input (if applicable):

Unioned Read-Only File System

Output (if applicable):

Unioned RW File System

The docker create command adds a read-write layer to the top of the stack based on the image id. It does not run this container.

Docker create command
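
A minimal sketch of this at the command line (ids are placeholders, not real output):

$ docker create ubuntu
<new-container-id>
$ docker ps      # empty: the new container is not running
$ docker ps -a   # the container appears with status “Created”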

docker start <container-id>

Input (if applicable):

Unioned RW File System

Output (if applicable):

Unioned RW File System inside an isolated process-space (a running container)

The command docker start creates a process space around the union view of the container’s layers. There can only be one process space per container.
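
For example (a sketch; <container-id> is the id printed by docker create):

$ docker start <container-id>
$ docker ps      # the container is now listed as running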

docker run <image-id>

Input (if applicable):

Unioned Read-Only File System

Output (if applicable):

Unioned RW File System inside an isolated process-space (a running container)

One of the first questions people ask (myself included) is “What is the difference between docker start and docker run?” You might argue that the entire point of this post is to explain the subtleties in this distinction.

What’s the difference between docker start and docker run?

As we can see, the docker run command starts with an image, creates a container, and starts the container (turning it into a running container). It is very much a convenience and hides the details of two commands.

Continuing with the aforementioned similarity to understanding the Git system, I consider the docker run command to be similar to git pull. Like git pull (which is a combination of git fetch and git merge), docker run is a combination of two underlying commands that have meaning and power on their own.

In this sense it is certainly convenient, but potentially apt to create misunderstandings.
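
To make the equivalence concrete, here is a sketch of the two paths to the same running state:

$ docker run ubuntu echo hello
hello

$ CID=$(docker create ubuntu echo hello)   # step one: create
$ docker start -a $CID                     # step two: start (-a attaches to output)
hello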

docker ps

Input (if applicable):

Your host system

Output (if applicable):

Inventory of running containers

The command docker ps lists out the inventory of running containers on your system. This is a very important filter that hides the fact that containers exist in a non-running state. To see non-running containers too, we need to use the next command.

docker ps -a

Input (if applicable):

Your host system

Output (if applicable):

All containers stopped or running

The command docker ps -a, where the a is short for all, lists out all the containers on your system, whether stopped or running.
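
A sketch of the two filters side by side:

$ docker run ubuntu true   # creates a container that exits immediately
$ docker ps                # empty: no running containers
$ docker ps -a             # the exited container is listed with an “Exited” status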

docker images

Input (if applicable):

Your host system

Output (if applicable):

Top level images

The docker images command lists out the inventory of top-level images on your system. Effectively, there is nothing to distinguish an image from a read-only layer. Only those images that have containers attached to them or that have been pulled are considered top-level. This distinction is for convenience, as there may be many hidden layers beneath each top-level read-only layer.

docker images -a

Input (if applicable):

Your host system

Output (if applicable):

All images in your system

This command docker images -a shows all the images on your system. This is exactly the same as showing all the read-only layers on the system. If you want to see the layers below one image-id, you should use the docker history command discussed below.
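
A sketch of the difference:

$ docker images      # top-level images only
$ docker images -a   # every read-only layer; intermediate layers typically show as <none>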

docker stop <container-id>

Input (if applicable):

SIGTERM

Output (if applicable):

Unioned RW File System

The command docker stop issues a SIGTERM to a running container which politely stops all the processes in that process-space. What results is a normal, but non-running, container.

docker kill <container-id>

Input (if applicable):

SIGKILL

Output (if applicable):

Unioned RW File System

The command docker kill issues a non-polite SIGKILL command to all the processes in a running container.
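
A sketch of both, assuming long-running containers started with docker run -d:

$ CID=$(docker run -d ubuntu sleep 1000)
$ docker stop $CID   # SIGTERM: a polite shutdown
$ CID=$(docker run -d ubuntu sleep 1000)
$ docker kill $CID   # SIGKILL: immediate
$ docker ps -a       # either way, the stopped containers remain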

docker pause <container-id>

Input (if applicable):

cgroup freezer

Output (if applicable):

Frozen process space

Unlike docker stop and docker kill, which send actual UNIX signals to a running process, the command docker pause uses a special cgroups feature to freeze/pause a running process-space. The rationale can be found here, but the short of it is that sending a Control-Z (SIGTSTP) is not transparent enough to the processes within the process-space to truly allow all of them to be frozen.
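
A sketch (docker unpause thaws the process-space again):

$ CID=$(docker run -d ubuntu sleep 1000)
$ docker pause $CID     # processes frozen via the cgroup freezer
$ docker unpause $CID   # processes resume where they left off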

docker rm <container-id>

Input (if applicable):

Container-id

Output (if applicable):

Removing a read-write layer

The command docker rm removes the read-write layer that defines a container from your host system. It must be run on stopped containers. It effectively deletes files.
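
For example (a sketch; ids are placeholders):

$ docker stop <container-id>   # rm refuses to delete a running container
$ docker rm <container-id>     # the read-write layer is gone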

docker rmi <image-id>

Input (if applicable):

image-id

Output (if applicable):

Removing a read layer

The command docker rmi removes the read-only layer that defines a “union view” of an image. It removes this image from your host, though the image may still be pulled again from the repository it came from. You can only use docker rmi on top-level layers (or images), and not on intermediate read-only layers (unless you use -f to force).
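
For example (a sketch):

$ docker rmi <image-id>      # the read-only layer(s) leave your host
$ docker pull <image-name>   # but the image can be fetched again from its repository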

docker commit <container-id>

Input (if applicable):

Container with a read-write layer or Top level read-write layer

Output (if applicable):

Read-only layer

The command docker commit takes a container’s top-level read-write layer and burns it into a read-only layer. This effectively turns a container (whether running or stopped) into an immutable image.

Docker Commit
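
A sketch that commits the happiness.txt change from earlier into a new image (names and ids are placeholders):

$ CID=$(docker run -d ubuntu touch happiness.txt)
$ docker commit $CID my/happiness
<new-image-id>
$ docker run my/happiness ls /happiness.txt   # the file is baked into the image
/happiness.txt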

docker build

Input (if applicable):

Dockerfile plus a Unioned Read-Only File System

Output (if applicable):

Unioned Read-Only File System with many more layers added atop.

The docker build command is an interesting one, as it iteratively runs multiple commands.

Docker Build command

We see this in the above visual, which shows how the build command uses the FROM directive in the Dockerfile as the starting image and then iteratively:

  1. runs (create and start)
  2. modifies
  3. commits

At each step in the iteration a new layer is created. Many new layers may be created from running a docker build.
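
A minimal sketch of this iteration, using a hypothetical two-step Dockerfile:

$ cat Dockerfile
FROM ubuntu
RUN apt-get update
RUN apt-get install -y curl
$ docker build -t my/curl .   # each RUN is a create+start, modify, commit cycle
$ docker history my/curl      # one new read-only layer per Dockerfile step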

docker exec <running-container-id>

Input (if applicable):

Container with a read-write layer

Output (if applicable):

Exec Process

The docker exec command runs on a running container and executes a process in that running container’s process space.
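
For example, to open a shell inside an already-running container (a sketch):

$ docker exec -it <running-container-id> bash   # a new process joins the existing process-space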

docker inspect <container-id> or <image-id>

Input (if applicable):

Container with metadata or Image with metadata

Output (if applicable):

Metadata

The command docker inspect fetches the metadata that has been associated with the top-layer of the container or image.
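
For example (a sketch; the --format flag takes a Go template to extract single fields):

$ docker inspect <container-id>   # full JSON metadata
$ docker inspect --format '{{.NetworkSettings.IPAddress}}' <container-id>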

docker save <image-id>

Input (if applicable):

Read layers with metadata

Output (if applicable):

Save_Tar

The command docker save creates a single tar file that can be imported onto a different host system. Unlike the export command, it saves the individual layers with all their metadata. This command can only be run on an image.

docker export <container-id>

Input (if applicable):

Read-write layer with metadata

Output (if applicable):

Export_Tar

The docker export command creates a tar file of the contents of the “union view” and flattens it for consumption by non-Docker tools. This command removes the metadata and the layers. This command can only be run on containers.
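
A sketch contrasting the two (file names are arbitrary):

$ docker save <image-id> > image.tar         # layers plus metadata, importable elsewhere
$ docker export <container-id> > rootfs.tar  # one flattened file system, no metadata
$ tar -tf rootfs.tar | head                  # plain root directories: bin/, etc/, usr/, ...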

docker history <image-id>

Input (if applicable):

Top-level image-id

Output (if applicable):

Read-only layers of image-id

The docker history command takes an image-id and recursively prints out the read-only layers (which are themselves images) that are ancestors of the input image-id.
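
For example (a sketch; output abridged, ids and sizes are placeholders):

$ docker history <image-id>
IMAGE         CREATED BY                            SIZE
<layer-id>    /bin/sh -c apt-get install -y curl    ...
<layer-id>    /bin/sh -c apt-get update             ...
<layer-id>    /bin/sh -c #(nop) CMD ["/bin/bash"]   ...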

Conclusion

I hope you enjoyed this visualization of containers and images. There are many other commands (pull, search, restart, attach, etc.) which may or may not relate to these metaphors. I believe, though, that the great majority of Docker’s primary commands can be more easily understood with this effort. I am only two weeks into learning Docker, so if I missed a point or something can be better explained, please drop a comment.



Clean Up Your Rails Helper File

Latest post from Jordan Maguire over at the TFG blog, on cleaning up your Rails helper file.

A Customized Approach to HTTP Proxy Caching in Ruby

AcornCache is a Ruby HTTP proxy caching library that is lightweight, configurable, and easily integrated with any Rack-based web application. It improves page load times and lightens the load on your server by implementing an in-memory cache shared by every client requesting a resource on your server. Please visit https://github.com/acorncache/acorn-cache for further information.

Online GUI configurator for Puppet & Vagrant


How do you pronounce PuPHPet?

The p is silent.

What do I need to get started with PuPHPet?

There are a few pre-requisites before you can begin your virtualized journey.

First, you must install the necessary tools: Vagrant and VirtualBox. They're easy to get and will only take a minute.

Second … well, that's all you need, really.

I downloaded the zip file, now what?

Using the terminal or command line, cd into your extracted directory and run $ vagrant up. This will kick off the initial process.

Vagrant will download the box file, which can take a few minutes. It will only have to do this once, even if you create separate environments later on.

Then, it will hand control over to Puppet which will begin setting up your environment by installing required packages and configuring tools as desired.

You will then be able to ssh into your new box with $ vagrant ssh. You can also access any virtual hosts you created by editing your hosts file and creating entries for the Box IP Address and Server Name you provided during configuration (ex: 192.168.56.101 puphpet.dev www.puphpet.dev). To shut down the VM, simply run $ vagrant halt. To start it back up, run $ vagrant up again. Destroy it with $ vagrant destroy.
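
The whole lifecycle, end to end (a sketch; run these from the extracted directory):

$ vagrant up        # first run downloads the box, then Puppet provisions it
$ vagrant ssh       # log in to the new VM
$ vagrant halt      # shut the VM down
$ vagrant destroy   # delete the VM entirely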

Further customizations with config.yaml

I have completely rewritten PuPHPet to take advantage of a built-in configuration tool for Puppet called Hiera. Simply look inside your downloaded folder and open puppet/config.yaml. This is the magical file that controls everything!

For example, if you want to have more OS-level packages installed (like vim, curl, git, etc) simply add more packages to server.packages. The exact same process exists for apache.modules.

To create a new Apache or Nginx vhost, simply copy/paste the one you may have created and customize to your needs.

Attention: if you see some sections with nonsensical array keys (ex: rIreAN33ne2a), that means they have to be unique! If you copy/paste to add new settings, you must ensure you change this unique key to some other random string! Bad Things Will Happen if you don't.

Learn you some Vagrant

You may want to learn the basics of Vagrant CLI by going here. You really only need to learn the very basics - that is what I created this app for!

How do I update my hosts file?

You will need to open and edit your hosts file with a text editor like Notepad, Sublime Text, nano, etc. The location of the hosts file varies by operating system.

Windows users could look here: c:\windows\system32\drivers\etc\hosts

Linux and Mac OSX users could look here: /etc/hosts.

Example Entry: 192.168.56.101 puphpet.dev www.puphpet.dev


The Accessibility Cheatsheet | bitsofcode


We all know that accessibility is important. The problem is, it is not always clear what exactly we can do to make our sites more accessible.

The Web Accessibility Initiative created some Web Content Accessibility Guidelines (WCAG) targeted at us, web content developers, to create more accessible websites. The WCAG contain some very useful information, and so I decided to condense the very extensive guidelines and highlight some practical examples of what we can do to implement them and make our websites more accessible.

Overview

The guidelines for accessible content have four overarching principles, each with more specific guidelines. You can click on the link to go to the relevant section of this article.

  • 1 - “Perceivable” - Information and user interface components must be presentable to users in ways they can perceive.
    • 1.1 - Text Alternatives
    • 1.2 - Alternatives for Time-Based Media
    • 1.3 - Adaptable Content
    • 1.4 - Distinguishable
  • 2 - “Operable” - User interface components and navigation must be operable.
    • 2.1 - Keyboard Accessible
    • 2.2 - Enough Time
    • 2.3 - Seizures
    • 2.4 - Navigable
  • 3 - “Understandable” - Information and the operation of user interface must be understandable.
    • 3.1 - Readable
    • 3.2 - Predictable
    • 3.3 - Input Assistance
  • 4 - “Robust” - Content must be robust enough that it can be interpreted reliably by a wide variety of user agents, including assistive technologies.

Principle 1 - “Perceivable”

1.1 Text alternatives

“All non-text content that is presented to the user has a text alternative that serves the equivalent purpose”

Plain text is the optimal format for any piece of content. This is because it can be used in many different formats to suit individuals with different disabilities. Therefore, it is important to provide a plain text alternative format for all content that is informative, i.e. not just decorative.

For images, use the alt attribute. The alternative text for an image should be as descriptive as possible, such that the same message is conveyed.

<img src="newsletter.gif" alt="Free newsletter. Get free recipes, news, and more." />

For audio and video elements, provide text transcripts. You can use the track element to specify timed text tracks for these media elements.

<!-- Format of the track element -->
<track kind="subtitles | captions | descriptions" src="path/to/file.vtt" srclang="" label="">

<!-- Example caption for an audio file -->
<audio controls>
  <source src="myaudio.ogg" type="audio/ogg">
  <track src="caption_en.vtt" kind="captions" srclang="en" label="English">
</audio>

<!-- Example descriptions of a video file in English and German -->
<video poster="myvideo.png" controls>
  <source src="myvideo.mp4" srclang="en" type="video/mp4">
  <track src="description_en.vtt" kind="descriptions" srclang="en" label="English">
  <track src="description_de.vtt" kind="descriptions" srclang="de" label="German">
</video>

For user interface elements, use labels. Labels can provide context for information that may otherwise be clear only visually. For example, where you have a primary and a secondary navigation that are styled differently, you can use aria-label to distinguish between them.

<div role="navigation" aria-label="Primary">
  <ul><li>...a list of links here ...</li></ul> 
</div>
<div role="navigation" aria-label="Secondary">
  <ul><li>...a list of links here ...</li> </ul>
</div>

1.2 Alternatives for time-based media

“Provide alternatives for time-based media.”

Time-based media (audio and video) can be especially difficult for individuals with hearing or vision difficulties. In addition to providing a plain text alternative, it may also be helpful to provide an alternative time-based media version. For example -

  • Sign language as part of a video file
  • Alternative audio for video files
  • Video file with sign language as alternative for audio files

1.3 Adaptable Content

“Create content that can be presented in different ways (for example simpler layout) without losing information or structure.”

Write your HTML in a meaningful sequence. Your document should be readable and understandable without any CSS. Lay out your HTML the way the page is intended to be read and, where possible, make use of semantic markup.

<header>
  <h1>Site Title</h1>
  <nav><!-- links --></nav>
</header>
<main>
  <h1>Page Title</h1>

  <section>
  <h2>Section Title</h2>
  <p>Lorem ipsum dolor sit amet, <strong>consectetur</strong> adipiscing elit. Pauca mutat vel plura sane; 
  Vide, quantum, inquam, fallare, Torquate. Iam in altera philosophiae parte.</p>
  </section>
</main>
<footer>
  <!-- Site credit -->
</footer>

Meaningful information should not be conveyed solely via sensory characteristics. Sensory characteristics such as shape, size, visual location, orientation, or sound should not be the only way of conveying important information.

If you want to convey that a button will delete content, for example, make sure that this is also written in text, as shown on the left. Do not rely solely on colour and icons, as shown on right.

Red Button With Delete Text vs Red Button With Trash Symbol

1.4 Distinguishable

“Make it easier for users to see and hear content including separating foreground from background.”

Contrast ratio of text to background should be at least 4.5:1, preferably 7:1. You can use Lea Verou’s app to find the contrast ratio of your site’s colours.

Example of Contrast Ratio of 5:1 with white bg and text of colour rgb 110 110 110

Text should be easily resizable. Text should be resizable using the default browser mechanisms up to 200% without a loss of content or functionality.

Text Readable at 100% and 200% zoom

Use actual text instead of images of text. As mentioned before, plain text is the most accessible format to use. Therefore, it is counterintuitive to use images of text where plain text can be used.

Control over audio media should be provided. If any audio is played on a web page, provide a mechanism for users to control it with pause/play buttons and volume controls independent of the system volume controls.

Principle 2 - “Operable”

2.1 Keyboard accessible

“Make all functionality available from a keyboard.”

Many people are unable to navigate the web using a mouse. Therefore, all functionality should be operable through the standard keyboard interface without requiring specific timings for individual keys.

Ensure all functional elements have a clear focus state. For people navigating a website using the tab key only, focus states are how they know their location on the page. You can use JavaScript to add keyboard accessibility to static elements if needed.

Showing Focus States of Elements on Bitsofcode Website

Avoid keyboard traps. Tab through the content of your website from start to finish to ensure that the keyboard focus is not trapped on any of the content.

2.2 Enough time

“Provide users enough time to read and use content.”

Provide controls for timed content. For any interactions related to timing - including moving information, auto-updating, or page time-outs - you should implement at least one of the following safeguards -

  • Users can turn off the time limit
  • Users can adjust the time limit to at least 10 times the length of the default setting
  • Users are warned before time expires and given at least 20 seconds to extend the time limit with a simple action

HSBC Online Banking timeout message

2.3 Seizures

“Do not design content in a way that is known to cause seizures.”

Flashing light should not occur more than three times per second. Or, the flash should be below the general flash and red flash thresholds. You can use photosensitive epilepsy analysis tools or flash tests to test your site if you are unsure.

2.4 Navigable

“Provide ways to help users navigate, find content, and determine where they are.”

Provide a link for users to skip to the page’s main content. One of the first links on every page of a website should include a link for users to bypass repeated blocks of content, such as the navigation. This is especially important for pages that have large, multi-layered navigation menus. The link itself does not need to be visible when out of focus. For example -

<head>
  <style>
    #skip_to {
      position: fixed;
      left: 0;
      top: 0;
      opacity: 0;
    }
    #skip_to:focus {
      opacity: 1;
    }
  </style>
</head>
<body>
    <a href="#main_content" id="skip_to">Skip to Main Content</a>

    <nav> <!-- Navigations links here --> </nav>
    <div id="main_content">
      <!-- Main content here -->
    </div>
</body>

Titles should be meaningful. The title of the web page, as well as the page heading, section headings, and labels, should describe the topic or purpose of the page.

Link purpose can be determined from link text. As far as is possible, the purpose of a link should be able to be determined from the text that is within the anchor tag itself.

Proper placement of anchor tag around meaningful text. Anchor tag around the words click here vs around the words more posts about HTML

Provide more than one way to locate a web page. The same page should be accessible by more than just one link on one page. For example, a site could have -

  • Complete site map on a single page
  • Search function to access all content
  • Navigation with links to all pages

Provide information about the current location. It is useful to provide information about where the current page is in relation to the rest of the website. This can be achieved with any of the following -

  • Breadcrumbs
  • Site map
  • Highlighting the current location in navigation
  • Using the <link rel="index | next | prev | contents"> tag to specify the current page’s relationship to other pages

Highlighting the current location in navigation on Designer News

Principle 3 - “Understandable”

3.1 Readable

“Make text content readable and understandable.”

Specify the language(s) of the page. Specify the language of the current page on the HTML element, and any languages of specific parts.

<html lang="en"> <!-- Language of the page is English -->
<head>
</head>
<body>

<h1>Page Title</h1>

<p>Health goth American Apparel quinoa, jean shorts cray you probably haven't heard of them Schlitz 
occupy actually tofu distillery disrupt letterpress fixie. Slow-carb keytar hella, actually B
ushwick irony semiotics Portland readymade photo booth taxidermy pork belly small batch try-hard yr. 
Thundercats blog normcore, tousled American Apparel art party.</p>

<!-- Language of this blockquote is German -->
<blockquote lang="de">
  Da dachte der Herr daran, ihn aus dem Futter zu schaffen,
  aber der Esel merkte, daß kein guter Wind wehte, lief fort
  und machte sich auf den Weg nach Bremen: dort, meinte er,
  könnte er ja Stadtmusikant werden.
</blockquote>  

<p>Health goth American Apparel quinoa, jean shorts cray you probably haven't heard of them Schlitz 
occupy actually tofu distillery disrupt letterpress fixie. Slow-carb keytar hella, actually B
ushwick irony semiotics Portland readymade photo booth taxidermy pork belly small batch try-hard yr. 
Thundercats blog normcore, tousled American Apparel art party.</p>

</body>
</html>

Provide meanings of unusual words and pronunciations of difficult words. You can use the title attribute to provide the meaning of abbreviations and unusual words. For definitions, you can use the dl element to provide a definition list.

<!-- Providing meaning inline -->
<abbr title="Austin Rocks">Au5t1N r0xx0rz</abbr>

<!-- Using a definition list -->
<p>That was a <a href="#d-humblebrag">humble brag</a></p>   

<dl>
  <dt id="d-humblebrag">Humble Brag</dt>
  <dd>Subtly letting others know about how fantastic your life is while undercutting 
  it with a bit of self-effacing humor or "woe is me" gloss.</dd>
</dl>

Make content available at a lower secondary education reading level. Teenagers aged 11 to 14 should be able to understand the content, even if specific terminology and concepts are new.

3.2 Predictable

“Make Web pages appear and operate in predictable ways.”

Consistent navigation. Navigation elements should be repeated in a consistent way throughout the website.

Consistent identification. Terminology and repeatable elements should appear consistently throughout the website.

No unprovoked changes of context. Any changes of context should only happen on request by the user. Things like redirects, popups and other similar interactions should be communicated clearly beforehand.

<html>    
  <head>      
    <title>The Tudors</title>      
    <meta http-equiv="refresh" content="0;URL='http://thetudors.example.com/'">    
  </head>    
  <body> 
    <p>This page has moved to <a href="http://thetudors.example.com/">theTudors.example.com</a>. 
    You will now be redirected to the new site.</p> 
  </body>  
</html>

3.3 Input Assistance

“Help users avoid and correct mistakes”

Provide labels and instructions. Provide labels or instructions for input elements. Where there is a commonly made error, provide suggestions that users can model their answers on.

Example showing cues to help with picking a password

Error messages in simple language. Errors made should be described to the user in plain, understandable text, not error codes.

Error messages in plain text when picking a new password

Error prevention. Where a user is submitting information, at least one of the following must be true -

  • The submission of information is reversible
  • The answers are checked for errors and the user is given the opportunity to correct them before submission
  • The user is given the opportunity to confirm the information before submission

Principle 4 - “Robust”

4.1 Compatible

“Maximize compatibility with current and future user agents, including assistive technologies.”

Write valid code. Ensure the compatibility of your HTML by making sure it passes validation checks. Some important things validation checks look for include complete start and end tags, properly nested elements, no duplicate attributes, and unique ids.

Specify the purpose of elements. Specify the name, role and value for user interface components where appropriate. For forms in particular, labels should be used where possible -

<form id="signupform">
  <label for="nm">Name</label> 
  <input id="nm" type="text" name="name" value=""> 
  
  <fieldset>
    <legend>Would you like to sign up?</legend>
    <input id="yes" name="request" value="yes" type="radio"> <label for="yes">Yes</label>
    <input id="no" name="request" value="no" type="radio"> <label for="no">No</label>
  </fieldset>

  <button type="submit">Submit</button>
</form>

Where the label cannot be used, you can use the title attribute instead -

<form id="searchform"> 
  <input type="text" title="site search" name="query" id="q" value=""> 
  <input type="submit" value="search">
</form>

aria-label can also be used to provide a label for a user interface element, where a label may not be present.

<div id="box">
   This is a pop-up box.
   <button aria-label="Close" 
           onclick="document.getElementById('box').style.display='none';" 
           class="close-button"> X </button>        
</div>

If you would like to read more about this, you can read the Web Content Accessibility Guidelines Reference, which goes into a lot more detail about how you can meet all the requirements.

I think the best thing that we can do is try to navigate the websites we create using only the mechanisms that people with disabilities use, such as screen readers. Doing this has really made me aware of things I should change on the sites I have made to make them easier to use.


AWS in Plain English


Base Services

No matter what you do with AWS you'll probably end up using these services as everything else interacts with them.

EC2

Should have been called: Amazon Virtual Servers
Use this to: Host the bits of things you think of as a computer.
It's like: It's handwavy, but EC2 instances are similar to the virtual private servers you'd get at Linode, DigitalOcean or Rackspace.

IAM

Should have been called: Users, Keys and Certs
Use this to: Set up additional users, set up new AWS Keys and policies.

S3

Should have been called: Amazon Unlimited FTP Server
Use this to: Store images and other assets for websites. Keep backups and share files between services. Host static websites. Also, many of the other AWS services write and read from S3.

See also: S3 in Plain English (S3 Buckets of Objects)

VPC

Should have been called: Amazon Virtual Colocated Rack
Use this to: Overcome objections that "all our stuff is on the internet!" by adding an additional layer of security. Makes it appear as if all of your AWS services are on the same little network instead of being small pieces in a much bigger network.
It's like: If you're familiar with networking: VLANs

Lambda

Should have been called: AWS App Scripts
Use this to: Run little self-contained snippets of JS, Java or Python to do discrete tasks. Sort of a combination of a queue and execution in one. Used for storing and then executing changes to your AWS setup or responding to events in S3 or DynamoDB.

See also: Lambda in Plain English

Web Developer Services

If you're setting up a web app, these are mostly what you'd end up using. These are similar to what you'd find in Heroku's Addon Marketplace.

API Gateway

Should have been called: API Proxy
Use this to: Proxy your app's API through this so you can throttle bad client traffic, test new versions, and present methods more cleanly.
It's like: 3Scale

RDS

Should have been called: Amazon SQL
Use this to: Be your app's MySQL, Postgres, or Oracle database.
It's like: Heroku Postgres

Route53

Should have been called: Amazon DNS + Domains
Use this to: Buy a new domain and set up the DNS records for that domain.
It's like: DNSimple, GoDaddy, Gandi

SES

Should have been called: Amazon Transactional Email
Use this to: Send one-off emails like password resets, notifications, etc. You could use it to send a newsletter if you wrote all the code, but that's not a great idea.
It's like: SendGrid, Mandrill, Postmark

Cloudfront

Should have been called: Amazon CDN
Use this to: Make your websites load faster by spreading out static file delivery to be closer to where your users are.
It's like: MaxCDN, Akamai

CloudSearch

Should have been called: Amazon Fulltext Search
Use this to: Pull in data on S3 or in RDS and then search it for every instance of 'Jimmy.'
It's like: Sphinx, Solr, ElasticSearch

DynamoDB

Should have been called: Amazon NoSQL
Use this to: Be your app's massively scalable key-value-ish store.
It's like: MongoLab

Elasticache

Should have been called: Amazon Memcached
Use this to: Be your app's Memcached or Redis.
It's like: Redis to Go, Memcachier

Elastic Transcoder

Should have been called: Amazon Beginning Cut Pro
Use this to: Deal with video weirdness (change formats, compress, etc.).

SQS

Should have been called: Amazon Queue
Use this to: Store data for future processing in a queue. The lingo for this is storing "messages" but it doesn't have anything to do with email or SMS. SQS doesn't have any logic; it's just a place to put things and take things out.
It's like: RabbitMQ, Sidekiq

WAF

Should have been called: AWS Firewall
Use this to: Block bad requests to Cloudfront-protected sites (aka stop people trying 10,000 passwords against /wp-admin).
It's like: Sophos, Kaspersky

Mobile App Developer Services

These are the services that only work for mobile developers.

Cognito

Should have been called: Amazon OAuth as a Service
Use this to: Give end users (non-AWS) the ability to log in with Google, Facebook, etc.
It's like: OAuth.io

Device Farm

Should have been called: Amazon Drawer of Old Android Devices
Use this to: Test your app on a bunch of different iOS and Android devices simultaneously.
It's like: MobileTest, iOS emulator

Mobile Analytics

Should have been called: Spot-on name, Amazon product managers take note
Use this to: Track what people are doing inside of your app.
It's like: Flurry

SNS

Should have been called: Amazon Messenger
Use this to: Send mobile notifications, emails and/or SMS messages.
It's like: UrbanAirship, Twilio

Ops and Code Deployment Services

These are for automating how you manage and deploy your code onto other services.

CodeCommit

Should have been called: Amazon GitHub
Use this to: Version control your code. Hosted Git.
It's like: GitHub, Bitbucket

Code Deploy

Should have been called: Not bad
Use this to: Get your code from your CodeCommit repo (or GitHub) onto a bunch of EC2 instances in a sane way.
It's like: Heroku, Capistrano

CodePipeline

Should have been called: Amazon Continuous Integration
Use this to: Run automated tests on your code and then do stuff with it depending on whether it passes those tests.
It's like: CircleCI, Travis

EC2 Container Service

Should have been called: Amazon Docker as a Service
Use this to: Put a Dockerfile into an EC2 instance so you can run a website.

Elastic Beanstalk

Should have been called: Amazon Platform as a Service
Use this to: Move your app hosted on Heroku to AWS when it gets too expensive.
It's like: Heroku, BlueMix, Modulus

Enterprise / Corporate Services

Services for business and networks.

AppStream

Should have been called: Amazon Citrix
Use this to: Put a copy of a Windows application on a Windows machine that people get remote access to.
It's like: Citrix, RDP

Direct Connect

Should have been called: Pretty spot on, actually
Use this to: Pay your telco + AWS to get a dedicated leased line from your data center or network to AWS. Cheaper than Internet out for data.
It's like: A toll road turnpike bypassing the crowded side streets.

Directory Service

Should have been called: Pretty spot on, actually
Use this to: Tie together other apps that need a Microsoft Active Directory to control them.

WorkDocs

Should have been called: Amazon Unstructured Files
Use this to: Share Word docs with your colleagues.
It's like: Dropbox, DataAnywhere

WorkMail

Should have been called: Amazon Company Email
Use this to: Give everyone in your company the same email system and calendar.
It's like: Google Apps for Domains

Workspaces

Should have been called: Amazon Remote Computer
Use this to: Get a standard Windows desktop that you're remotely controlling.

Service Catalog

Should have been called: Amazon Setup Already
Use this to: Give other AWS users in your group access to preset apps you've built so they don't have to read guides like this.

Storage Gateway

Should have been called: S3 pretending it's part of your corporate network
Use this to: Stop buying more storage to keep Word docs on. Make it easier to automate getting files into S3 from your corporate network.

Big Data Services

Services to ingest, manipulate and massage data to do your will.

Data Pipeline

Should have been called: Amazon ETL
Use this to: Extract, transform and load data from elsewhere in AWS. Schedule when it happens and get alerts when it fails.

Elastic Map Reduce

Should have been called: Amazon Hadooper
Use this to: Iterate over massive text files of raw data that you're keeping in S3.
It's like: Treasure Data

Glacier

Should have been called: Really slow Amazon S3
Use this to: Make backups of the backups you keep on S3. For long-term archiving. Also, beware the cost of getting data back out in a hurry.

Kinesis

Should have been called: Amazon High Throughput
Use this to: Ingest lots of data very quickly (for things like analytics, or people retweeting Kanye) that you then later use other AWS services to analyze.
It's like: Kafka

RedShift

Should have been called: Amazon Data Warehouse
Use this to: Store a whole bunch of analytics data, do some processing, and dump it out.

Machine Learning

Should have been called: Skynet
Use this to: Predict future behavior from existing data for problems like fraud detection or "people that bought x also bought y."

SWF

Should have been called: Amazon EC2 Queue
Use this to: Build a service of "deciders" and "workers" on top of EC2 to accomplish a set task. Unlike SQS, logic is set up inside the service to determine how and what should happen.
It's like: IronWorker

Snowball

Should have been called: AWS Big Old Portable Storage
Use this to: Get a bunch of hard drives you can attach to your network to make getting large amounts of data (terabytes) into and out of AWS.
It's like: Shipping a Network Attached Storage device to AWS

AWS Management Services

AWS can get so difficult to manage that they invented a bunch of services to sell you to make it easier to manage.

CloudFormation

Should have been called: Amazon Services Setup
Use this to: Set up a bunch of connected AWS services in one go.

CloudTrail

Should have been called: Amazon Logging
Use this to: Log who is doing what in your AWS stack (API calls).

CloudWatch

Should have been called: Amazon Status Pager
Use this to: Get alerts about AWS services messing up or disconnecting.
It's like: PagerDuty, Statuspage

Config

Should have been called: Amazon Configuration Management
Use this to: Keep from going insane if you have a large AWS setup and changes are happening that you want to track.

OpsWorks

Should have been called: Amazon Chef
Use this to: Handle running your application with things like auto-scaling.

Trusted Advisor

Should have been called: Amazon Pennypincher
Use this to: Find out where you're paying too much in your AWS setup (unused EC2 instances, etc.).

Inspector

Should have been called: Amazon Auditor
Use this to: Scan your AWS setup to determine whether you've set it up in an insecure way.
It's like: Alert Logic