Stereoscopic 3D

Dump Those Silly Colored 3D Glasses

They say you’re not a true 3D enthusiast until you’ve got a shelf full of red/cyan and green/magenta anaglyph 3D glasses. I’m ready to dump mine in the waste bin, but there’s this little problem: two more shelves of anaglyph DVD, BluRay, and VHS movies collected over the years. Soon the studios will start to release the latest blockbusters in full color BluRay 3D, but somehow I doubt they’ll find the time or budget to convert “Comin’ At Ya” or “The Stewardesses” from anaglyph to full color 3D. Who knows, maybe the full color film prints are lost forever.

For a couple of years there were two online purveyors of converted movies; they both did a creditable job, but recently dropped out of sight. My suspicion is that the films’ copyright holders got wise and shut them down for selling unlicensed copies. I doubt the sin had anything to do with 3D, just that they were selling unauthorized copies of the studios’ movies.

However, for those of us who purchase legitimate anaglyph 3D movies from Amazon or our local video store, the courts have strongly affirmed that we can watch them any way we choose, in private, whether we watch standing on our head, projected on a fishbowl, reflected off the water in our toilet, or even in full color 3D! In other words, if we have a gadget at home that translates anaglyph movies into full color and we use it solely to privately watch the legitimate 3D movies we own, we’re well within our legal rights.

And here’s how!

Anaglyph movies come in a number of flavors. This post deals with red/cyan and green/magenta anaglyphs, which cover the majority of releases available. The goal is to create full color left and right video streams from these anaglyph releases. To keep things simple, this post describes a method to output either side-by-side or top-bottom formatted full color stereoscopic video in a single file. The tools used are open source: AVISynth and VirtualDubMod, as is the code I’ve contributed (refer to the license that accompanies the code.) For convenience, there are download links for everything needed at the end of this post.

This post is not a cookbook for the uninitiated. If you are comfortable with AVISynth and VirtualDubMod, or are willing to learn these two programs, you’ll be fine. You don’t need to be a coder, you just need to be experienced in using these programs. Similarly, you need to have a basic knowledge of 3D formats and some experience critically viewing 3D. If you’re a 3D enthusiast or a professional, you’re already there.

Prepare your computer

First thing is to get your system ready. Sadly there’s no AVISynth for ‘nix or Mac, so we’re talking PC with XP, Vista, or Win7. The conversion process should run on any PC capable of running these OS’s, but it won’t be much fun without a dual core and 2 GB+ of memory. I’ve done my testing on an i5-750 with 4 GB. Your mileage may vary.

At a minimum, install AVISynth, VirtualDubMod, and the K-Lite Codec Pack. Since AVISynth only ingests AVI, WMV, MPG, and MKV files, you’ll almost certainly want to install software that converts your DVDs and BluRays to one of these formats. For DVD, I recommend VOB2MPG (and if you need to access encrypted DVDs, add DVD Decryptor.) For BluRay, I recommend MakeMKV.

The MOST IMPORTANT thing in getting your DVDs and BluRays into a computer format is that the ripping process should not re-encode (i.e. decompress and then recompress) your video. Anaglyph colors are very delicate, and each compression generation makes your 3D conversion ever so much more difficult. VOB2MPG and MakeMKV can both do the conversion without recompression.

How the code works

The actual 3D anaglyph-to-full color conversion logic is contained in two AVISynth scripts (.AVS files.) The core logic is in AnaExtract.AVS; unless you’re a coder, you probably won’t fool with this. Just tuck the file away in a safe folder somewhere. The other script is where you set all the parameters for a particular movie you’re converting. I like to name these files: XX-DeAna.avs, where the XX is replaced by the name of the movie. However, you can name each copy of the file anything you like.

The video near the top of this post demonstrates how to use AVISynth, VirtualDubMod, and the parameter file to convert an anaglyph movie. While that should be enough to get you going, the gory technical details begin now. I’ll get back to the parameter .AVS file later on, but first let’s pick apart the actual core processing module: AnaExtract.AVS. You probably will never have to change anything in this section of code; it’s always “included” with your parameter script which is described later in this post. But, if you want to see what goes on under the hood, keep reading this section.

Yes, the code is open source

# AnaExtract.avs
# Tone at VRtifacts.com
# V 0.9 June 27, 2010
# Copyright (C) 2010 Tony Asch
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.

The main points are: There’s no warranty, feel free to modify and distribute, but you must include the source in any distribution.

Find the files we want to convert

# Snag our video files
vidL = DirectShowSource(anaglyphName)
vidOrig = DirectShowSource(anaglyphName)
vidsound = DirectShowSource(SoundName, audio=true, video=false)

The parameter AVS file identifies the video file paths and names. Here we open a couple of copies of the anaglyph movie and a copy of the audio only.

Deal with different anaglyph formats – red/cyan or green/magenta

# We need these for later
# Skip if we've already got a Mono Left source
# Red/Cyan?
(inputFormat == "RC") && (isMono != "monoLeft") ? Eval("""
vidL = ConvertToRGB(vidL)
vidL = MergeRGB(vidL.ShowRed("YV12"), vidL.ShowRed("YV12"), vidL.ShowRed("YV12"))
vidL = Greyscale(vidL)
""") : Eval(""" """)
# Green/Magenta?
(inputFormat == "GM") && (isMono != "monoLeft") ? Eval("""
vidL = ConvertToRGB(vidL)
vidL = MergeRGB(vidL.ShowGreen("YV12"), vidL.ShowGreen("YV12"), vidL.ShowGreen("YV12"))
vidL = Greyscale(vidL)
""") : Eval(""" """)

# Skip if we've already got a Mono Right source
# Red/Cyan?
(inputFormat == "RC") && (isMono != "monoRight") ? Eval("""
vidR = ConvertToRGB(vidOrig)
vidR = MergeRGB(vidR.ShowGreen("YV12"), vidR.ShowGreen("YV12"), vidR.ShowGreen("YV12"))
vidR = Greyscale(vidR)
""") : Eval(""" """)
# Green/Magenta?
(inputFormat == "GM") && (isMono != "monoRight") ? Eval("""
vidR = ConvertToRGB(vidOrig)
vidR = MergeRGB(vidR.ShowRed("YV12"), vidR.ShowRed("YV12"), vidR.ShowBlue("YV12"))
vidR = Greyscale(vidR)
""") : Eval(""" """)

Many 3D DVDs include both a 2D and a 3D copy of the same movie, and the 2D version is often a perfect reproduction of either the left or right eye view. When that’s the case, we’ve cut our work in half and are assured that one eye’s view is perfect. The block of code above separates the two anaglyph colors into independent left and right video streams, though once they’re transformed to greyscale it’s primarily luminance information we get. If we already have one eye from the 2D copy, we don’t need to extract an imperfect copy of that eye’s view from the anaglyph. Cool! In the red/cyan case, I found the blue channel to be quite noisy, so even though cyan is a combination of blue and green, I just grabbed the green and used it as luminance information. If you don’t have a 2D version, set the parameter isMono = “monoNone”; the monoName parameter can then be set to “nothing” or any other string, since it won’t be used by the code.
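As a rough sketch of what the channel separation does for a single RGB pixel, here it is in Python (illustrative only, not AVISynth; the real script works on whole frames, and the Rec.601 greyscale weights are my assumption about what AVISynth's Greyscale does here):

```python
def split_anaglyph_pixel(r, g, b, input_format="RC"):
    """Return (left, right) luminance estimates for one pixel.

    Red/cyan: left eye from the red channel, right eye from green
    alone -- the blue half of cyan is discarded as too noisy.
    Green/magenta: left from green; right built mostly from red plus
    a little blue, echoing the MergeRGB(ShowRed, ShowRed, ShowBlue)
    trick in the script, collapsed to grey with Rec.601 weights.
    """
    if input_format == "RC":
        return r, g
    elif input_format == "GM":
        right = round(0.299 * r + 0.587 * r + 0.114 * b)
        return g, right
    raise ValueError("unknown anaglyph format")

# Red/cyan simply routes the channels:
# split_anaglyph_pixel(200, 50, 120, "RC") -> (200, 50)
```

The key point is that each eye's detail survives only as luminance; color comes back later from the blurred color source.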

Prepare the full color information

# Prepare color information by resizing the image down and back up,
# creating a blurred version of the source for color restoration
# Note: the decimateHoriz/Vert values are percentages (1-100)
# defining the size of the shrunken version. Small numbers give more blur
# Code has a hack to make sure the shrunk version dimensions are even numbers
vidColor = DirectShowSource(PureColName)
vidColor = BicubicResize(vidColor, Int((width(vidL) * decimateHoriz) / 200.0) * 2, Int((height(vidL) * decimateVert) / 200.0) * 2)
vidColor = BicubicResize(vidColor, width(vidL), height(vidL))

In the prior step we extracted greyscale copies of the left and right video streams. In preparation for re-coloring them, we need to calculate the colors to apply. Since the left and right videos are very similar, perhaps offset horizontally in places, and since the human eye perceives much more detail in luminance than in chroma, we can apply color with a very blurred paint brush. Back in the parameter .AVS file we defined the name and path of the best video file to use for extracting color information. This would be the 2D version, if we have it; otherwise it will be the anaglyph version.

In AVISynth, the quickest way to blur is to resize a video down to a very small size and then resize it back up to full size. For convenience, the parameter file specifies this independently for the width and height, both in percentage terms (1-100.) The code’s “/ 200.0) * 2” converts the percentage to a fraction of the original size and also assures that the shrunken width and height will be even numbers (required by AVISynth.)
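To make the arithmetic concrete, here's the percentage-to-even-pixels calculation sketched in Python (illustrative only, not AVISynth; the 720x480 frame size is a hypothetical DVD-sized example):

```python
def shrink_dims(width, height, decimate_horiz, decimate_vert):
    """Shrunken frame size used for the color-blur pass.

    decimate_* are percentages (1-100) of the original size;
    dividing by 200 and then doubling rounds each dimension
    down to an even number, as AVISynth requires.
    """
    w = int((width * decimate_horiz) / 200.0) * 2
    h = int((height * decimate_vert) / 200.0) * 2
    return w, h

# With the defaults from the parameter file (5% horizontal, 20% vertical):
# shrink_dims(720, 480, 5.0, 20.0) -> (36, 96)
```

Blurring is then just resizing this tiny frame back up to full size with a bicubic filter.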

Horizontal and Vertical color blurring are specified independently because anaglyph 3D images are offset primarily along the horizontal dimension and we need much more blurring horizontally than vertically. Nonetheless, many movies are shot with imprecise 3D cameras which are not perfectly aligned and may exhibit dissimilar lens distortions between the left and right lens. Vertical color blurring can help to cover these problems. Also, remember that we are not yet blurring the luminance, which is where we perceive image detail.

Correct for any leakage between left and right

# Undoubtedly a little bit of the wrong eye has leaked over during the Anaglyph encoding process
# Subtract out a bit of the wrong eye. Specified in percentage (0-100)
vidL = Overlay(vidL, vidR, mode="subtract", opacity=(leakCorrectL / 100.0))
vidL = Levels(vidL, 0, 1.0, Int(255 * (1.0 - (leakCorrectL / 100.0))), 0, 255, coring=false)

vidR = Overlay(vidR, vidL, mode="subtract", opacity=(leakCorrectR / 100.0))
vidR = Levels(vidR, 0, 1.0, Int(255 * (1.0 - (leakCorrectR / 100.0))), 0, 255, coring=false)

Often there is leakage between the left and right anaglyph images (and thus the left and right luminance images we’ve calculated here), produced by the mastering or encoding. This bit of code corrects for this leakage by subtracting the opposite eye in small amounts (specified in the parameter file.) Since the subtraction is clamped at zero, we then rescale to full range: 0-255 luminance (8 bits.)
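The per-pixel arithmetic behind that Overlay/Levels pair can be sketched in Python (illustrative only; the pixel values and leak percentage below are hypothetical):

```python
def leak_correct(pixel, wrong_eye_pixel, leak_pct):
    """Subtract a fraction of the wrong eye, clamp at zero,
    then rescale back to the full 0-255 range (the Levels step)."""
    v = max(pixel - wrong_eye_pixel * (leak_pct / 100.0), 0.0)
    if leak_pct < 100:
        # input range [0, 255*(1-leak)] stretched back to [0, 255]
        v = v / (1.0 - leak_pct / 100.0)
    return min(int(round(v)), 255)

# With 10% leakage correction:
# leak_correct(200, 100, 10) -> 211   (190 rescaled by 1/0.9)
# leak_correct(10, 200, 10)  -> 0     (clamped at zero)
```

The rescale matters: without it, every corrected frame would get uniformly darker as you raise the leak percentage.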

Reduce color fringing

# Horizontally blur if needed - reduces fringing from excessive video peaking.
# Skip if = 1.0 (no blur). Otherwise larger value produce more blur
blurRight != 1.0 ? Eval("""
vidR = BicubicResize(vidR, Int(width(vidOrig) / (blurRight * 2.0)) * 2, height(vidOrig))
vidR = BicubicResize(vidR, width(vidOrig), height(vidOrig))
""") : Eval(""" """)
blurLeft != 1.0 ? Eval("""
vidL = BicubicResize(vidL, Int(width(vidOrig) / (blurLeft * 2.0)) * 2, height(vidOrig))
vidL = BicubicResize(vidL, width(vidOrig), height(vidOrig))
""") : Eval(""" """)

In an attempt to make DVDs look sharper, some anaglyph movies have excessive peaking, a video process to enhance detail. This leaves thin white fringes on the edges of color and luminance boundaries. The boundaries of our red/cyan or green/magenta areas are particularly susceptible. While there are doubtless more sophisticated ways to deal with these fringes, in the interest of processing (and coding) speed, an optional horizontal blur is applied to the offending luminance stream(s.)
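The amount of horizontal shrink behind that blur can be sketched as follows (Python, illustrative only; the blur factor corresponds to the blurLeft/blurRight parameters):

```python
def intermediate_width(width, blur_factor):
    """Width of the shrunken frame used for the de-fringing blur.

    blur_factor == 1.0 means no blur; larger values shrink the
    frame harder before it is resized back up, blurring more.
    The int(...) * 2 keeps the width even for AVISynth.
    """
    return int(width / (blur_factor * 2.0)) * 2

# intermediate_width(720, 2.0) -> 360  (half width, then resized back up)
# intermediate_width(720, 1.0) -> 720  (identity; the script skips this case)
```

Only the width shrinks; the height is left alone, so the blur attacks the horizontal fringes without softening vertical detail.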

Convert to YUV Color Space

# YUV color space for chroma operations
vidR = ConvertToYV12(vidR)
vidL = ConvertToYV12(vidL)
vidColor = ConvertToYV12(vidColor)

Although we’ve tried to keep our processing in RGB space as much as possible, AVISynth likes to paint in color on YV12 color space, and so we convert.

Restore the color

# Use our blurred color video to restore colors to greyscale Right and Left videos
# Skip if we've already got a Mono Right source
isMono != "monoRight" ? Eval("""
vidR = MergeChroma(vidR, vidColor)
""") : Eval(""" """)
# Skip if we've already got a Mono Left source
isMono != "monoLeft" ? Eval("""
vidL = MergeChroma(vidL, vidColor)
""") : Eval(""" """)

At long last we apply color to the two luminance video streams, except in the case where we already have a perfectly serviceable stream from the 2D version off the DVD or BluRay.

Do some final color correction

# Final brightness and contrast tweak
vidR = Tweak(vidR, bright=tweakBrightR, cont=tweakContR, sat=tweakSatR, coring=false)
vidL = Tweak(vidL, bright=tweakBrightL, cont=tweakContL, sat=tweakSatL, coring=false)

Chances are good that we may want to do a bit of brightness, contrast, and color saturation correction on each of the two video streams.

Process an existing 2D version

# Maybe we already have a 2D version for one of the eyes
isMono == "monoRight" ? Eval("""
vidR = ConvertToRGB(DirectShowSource(monoName))
""") : Eval("""
vidR = ConvertToRGB(vidR)
""")

isMono == "monoLeft" ? Eval("""
vidL = ConvertToRGB(DirectShowSource(monoName))
""") : Eval("""
vidL = ConvertToRGB(vidL)
""")

If we had that handy 2D version, we avoided processing the left or right stream that corresponds to that 2D view. Now it’s time to insert our 2D video in its proper place.

Swap left and right video if needed

# Swap if needed
swapAnaglyph == "Yes" ? Eval("""
vidTemp = vidL
vidL = vidR
vidR = vidTemp
""") : Eval(""" """)

The user might want us to swap the left and right video streams. Just doing this to be polite!

Build the side-by-side or top-bottom output

# Build the Side by Side (SBS) or Top Bottom (TB) combination of Left and Right video
outputFormat == "SBS_Left_First" ? Eval("""
StackHorizontal(vidL, vidR)
""") : Eval(""" """)

outputFormat == "SBS_Right_First" ? Eval("""
StackHorizontal(vidR, vidL)
""") : Eval(""" """)

outputFormat == "TB_Left_Top" ? Eval("""
StackVertical(vidL, vidR)
""") : Eval(""" """)

outputFormat == "TB_Right_Top" ? Eval("""
StackVertical(vidR, vidL)
""") : Eval(""" """)

Assemble both video streams into a single video, either side-by-side or top-bottom, with the choice of whether left or right comes first (left/top.)

Resize the output if needed

# Optionally resize the output video
outputResize == "Yes" ? Eval("""
BicubicResize(outputWidth, outputHeight)
""") : Eval(""" """)

The user might want to resize the final video smaller or larger. Here’s where it happens. The specified sizes are for the final combined single video stream.

Dub in the audio track

# Dub in the sound
AudioDub(last, vidsound)

And add the sound back in. If the proper codecs are installed, AVISynth should handle PCM, MP3, AC3, 5.1, 7.1, etc…

The Parameter File

The Parameter file is where you tell AVISynth where your Anaglyph movie is located in your file system, and where you set the conversion parameters for a specific movie. You’ll want a copy of this file for every movie you convert, because you’ll be editing parameters that control the conversion of a single movie. This is the file you load with File->Open Video in VirtualDubMod. It tells AVISynth how you want the movie converted. At the end of the parameter file, the core processing code is “included” and invoked. Although we walk through each parameter in the video at the top of this post, let’s have another run through.

Name the files

# Setup our input files
anaglyphName = "F:/3D Conversions/MovieFolder/YourMovie-GM.avi" # Anaglyph video
PureColName = "F:/3D Conversions/MovieFolder/YourMovie-2D.avi" # Video with color info (either Anaglyph or 2D)
monoName = "F:/3D Conversions/MovieFolder/YourMovie-2D.avi" # Possible 2D video for one eye, if not set to "nothing"
SoundName = "F:/3D Conversions/MovieFolder/YourMovie-2D.avi" # Video with the sound track we want

# Maybe we already have one eye's version in 2D already,
# i.e. the DVD or BR has both 2D and 3D versions
# Set to: monoRight or monoLeft or monoNone
isMono = "monoLeft"

This section tells the conversion process where all the input files are located. Most important is the anaglyph file. Second is the video with our color information. If we’ve got a 2D copy, that’s the best source for color; otherwise you can get reasonable color by using the anaglyph video. If we have a 2D video and we want to use it for either the left or right eye view, the third line is where to specify it. If not, you can put anything in this parameter (string.) The fourth file tells the conversion where to get the audio tracks. Typically this will be either the 2D version or the anaglyph version, but you could use any video file with a sound track.

WARNING: all of these files must be synced to frame accuracy. Sometimes the 2D version and the 3D version are not exactly the same, often being different in the opening credits. You should pre-process the files to be frame accurate before running this conversion!

The next parameter, isMono, tells our conversion whether we already have a 2D version that corresponds to one of the eyes’ views. You can set this to monoLeft or monoRight to tell the conversion that you have a 2D copy that should be used as the left or right output. If you don’t have a 2D version, or don’t want to use it, set isMono = “monoNone”.

Format the output file

# Swap eyes: either Yes or No
# Note: it is industry standard to put Red on the left eye for RC glasses
# and Green on the left eye for GM glasses
# It would be unusual to set this parameter to Yes
# since the un-swapped arrangement is either Red=Left or Green=Left
swapAnaglyph = "No"

# Output formatting:
# Choices are:
# SBS_Left_First, SBS_Right_First, TB_Left_Top, TB_Right_Top
# Meaning Side-by-Side (SBS) or Top-Bottom (TB)
# And choosing which eye is in which position
# This happens after the optional swap (above)
# and is somewhat redundant, but makes the eye choices clearer.
outputFormat = "SBS_Left_First"

# Resize the output video? Either Yes or No
# If set to No, then the output video is either
# twice the width of the input (for SBS)
# or twice the height of the input (for TB)
outputResize = "No"

# If we are resizing the output, specify the dimensions (Int)
# These dimensions apply to the stacked video size
outputWidth = 500
outputHeight = 200

This section deals with the output file. Although you name the output file in VirtualDubMod (File->Save As…), the layout of that file is determined here. If you want to swap the left and right videos in the output, set swapAnaglyph = “Yes”; otherwise it should be “No”.

Next you need to tell the conversion whether the two output videos should be arranged side-by-side or stacked vertically. In addition you’ll indicate whether you want the left video first (i.e. on the left of a side-by-side, or top of a vertical stack), or the right video first. The swapAnaglyph parameter reverses the meaning of this order.

If outputResize = “No”, then the width and height of the output video are taken from the input videos (which must all be the same size!) For side-by-side format, the output will be twice as wide as the input, but exactly the same height. For a vertical stack, the output will be exactly as wide as the input, but twice as tall.
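Putting those size rules in code form (Python, illustrative only; the 720x480 input is a hypothetical DVD-sized frame):

```python
def stacked_dims(in_w, in_h, output_format):
    """Final frame size when outputResize == "No":
    doubled along the axis the two eyes are stacked on."""
    if output_format.startswith("SBS"):
        return in_w * 2, in_h
    elif output_format.startswith("TB"):
        return in_w, in_h * 2
    raise ValueError("unknown output format")

# stacked_dims(720, 480, "SBS_Left_First") -> (1440, 480)
# stacked_dims(720, 480, "TB_Left_Top")    -> (720, 960)
```

This is also why very wide side-by-side outputs can trip up some codecs, as noted below.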

If outputResize = “Yes”, then you can specify the output width and height.

Be careful with very large dimensions, especially the width, as some codecs can’t handle really big sizes (>2k.)

Prepare the color information

# How much to blur the color information (Int or Float)
# This is done by shrinking the color video down in size
# and then resizing it back up to full resolution
# producing a blurred full resolution image
# The two decimate numbers are expressed as percentages
# i.e. a percentage of the full resolution to calculate
# the shrunk size. 100 means no shrink, 10 means 1/10 the
# resolution of the original, etc.
# Anaglyphs are only offset horizontally, so the color blur
# should be strong horizontally, but weak vertically
# For films where the cameras were misaligned vertically
# you will need to make the vertical blur greater.
decimateHoriz = 5.0 # Horizontal shrinkage
decimateVert = 20.0 # Vertical shrinkage - can usually be much bigger than decimateHoriz

As part of the conversion, the videos will be re-colored. This color is extracted from the file you assigned to PureColName. Because the anaglyph’s left and right videos are shifted horizontally depending on the depth and strength of the 3D, we can’t be exactly sure where to re-color. Instead we blur the colors so that they approximate the proper location, relying on humans’ perceptual weakness for color detail. The blur is achieved by shrinking the color video down and then expanding it back up to full size. There are separate parameters for horizontal and vertical shrink because 3D anaglyphs are displaced mostly in the horizontal direction and therefore need more horizontal blurring. The vertical blurring can help compensate for vertical camera misalignment, lens distortions, and other vertical artifacts.

Deal with left-right leakage and color fringing

# In case one anaglyph eye has leaked into the other
# We can try to remove that leakage by subtraction
# Expressed as percentage (int or float) (-100 to 100) (0 means none)
leakCorrectR = 10 # Leakage of left into the right eye
leakCorrectL = 0 # Leakage of right into the left eye

# Option to horizontally blur the left and right videos,
# just before the color is restored (before optional LR swap)
# Helps remove some of the fringing that appears in poor DVD encodes
# Set to exactly 1.0 for no processing (faster!!),
# > 1.0 blurs... try 1.5 to 4.0
blurLeft = 1.0
blurRight = 2.0

Here we attempt to correct for leakage of left into right and vice versa. If you see ghosts of one side appearing in the other side, try subtly adjusting these parameters. Of course, with a great BluRay anaglyph, no correction may be needed at all.

Some anaglyph videos, especially those from DVD or VHS sources will show some fringing around the anaglyph color boundaries after separation by this conversion process. The fringes can be minimized by a slight horizontal blurring.

A final round of color correction

# Final brightness and contrast adjustments
tweakBrightL = 0 # Left brightness, integer to add to each pixel (pixels are 0-255)
tweakContL = 1.0 # Left contrast adjustment (1.0 means no contrast adjustment)
tweakSatL = 1.0 # Left saturation adjustment (1.0 means no saturation adjustment)

tweakBrightR = -50 # Right brightness, integer to add to each pixel (pixels are 0-255)
tweakContR = 1.35 # Right contrast adjustment
tweakSatR = 1.3 # Right saturation adjustment

Often the anaglyph conversion process will leave the videos looking a bit washed out, or dark, or desaturated. Here is your opportunity to do some basic color correction on either the left or right video (or both!)

Load the actual conversion code

# Common code to do the conversion
# Make sure this file path points to
# the file on your system.
import("F:\3D Conversions\AnaExtract.avs")

And finally at the end of the parameter file, the actual conversion code is invoked. Just make sure the path to AnaExtract.AVS is correct for your system configuration. You can put it wherever you want, but you’ll need to change the final line of the parameter file to point to the proper location in your file system.


Anaglyph Conversion AVS Scripts



K-Lite Codec Pack (not required, but makes things much easier!)

AC-3 Sound Codec (not required, but the AC-3 codec in K-Lite doesn’t work with AVISynth)

MakeMkv – Rips BluRay to MKV with no re-encoding (currently free)



Stuff Yet to be Done

  • Convert red/blue anaglyph (trivial)
  • Convert ColorCode anaglyph (brutal!)
  • Better defringing
  • Output dual stream WMV
  • Output 2 separate files
  • Better leakage correction
  • Interlaced output
  • Optimize for speed
  • Port to After Effects, Premiere, Vegas, etc…
  • Linux version

It’s open source. You can help too!


Much of this code was inspired by prior AVISynth anaglyph converters from Olivier Amato and The Lone Wandering Soul. You probably will see some similarities in a few areas. That’s not coincidental. My thanks to both of them.

Sega VR – Mighty Barfin’ Power Rangers (we are the 40 percent)

Sega (all hail Sonic!): 1991 brought the announcement of Sega VR, a $200 headset for the Genesis console, a prototype finally shown at summer CES 1993, and consigned to the trash heap of VR in 1994, before any units shipped. Sega claimed that the helmet experience was just too realistic for young children to handle, but the real scoop from researchers showed that 40% of users suffered from cybersickness and headaches. It’s fair to say that Sega undoubtedly anticipated a sea of lawsuits; as one pundit in the industry put it: “It will be like the Pinto’s exploding gas tank.”

Perfectly capturing the annoying VR hype of the era is Alan Hunter’s (MTV) summer 1993 CES intro of Sega VR:

Money quote from a teen featured in the promo: “I thought I was going to have to wait till I was old… like 30, to get VR at home!” It’s now 2012, he’s closing in on 40, and still waiting.

Much more info can be found in Ken Horowitz’s 1994 review. Four games were produced especially for Sega VR, never to be released.

Here’s some sense of the much feared “realism” which provoked Sega to pull the plug on production:

Much to Sega’s credit, their VR fail was at least an original marketing effort, whereas later in the 1990’s, Nintendo’s Virtual Boy and Atari’s (Virtuality designed) Jaguar VR crashed and burned in much the same mode (although at far greater expense.)

Platypus Headsets?

The Science Channel interviews Jaron Lanier who shows off some wide field of view headsets from the late 80’s. Jaron feels like a platypus when wearing one of these JumboTrons. The narrator’s conclusion (and Jaron’s as well): The state of the art in VR hasn’t progressed too much further.

(A tip of the hat to Aphradonis over at mtbs3d.com for finding this little gem!)


3-Dimensions… For the First Time… 3-D FEELIES!!

MAD Magazine, June 1954:

DDD (3D) COMICS DEPT: By now you are familiar with 3-D Comic Books! You Know that some 3-D books enclose One set of 3-D glasses… You know some 3-D Books enclose Two sets of 3-D glasses! We are proud to announce that we of Mad are enclosing No sets of 3-D glasses for this, our first Mad story in… 3-DIMENSIONS!

Mad 3D

(click on Super-Mickey to view original article)


Vuzix Wrap 920 Augmented Reality Hands On

Recently I got my hands on the brand new Vuzix video-see-through augmented reality HMD, the Wrap 920 AR. It’s not quite a consumer product; it’s aimed more at R&D in the augmented reality field. There is little information about it on the net, and a few people asked me for a review. Besides, I hope it will be interesting to many VR geeks on the planet, so here it comes. I want to give as much info as I can, but I’ll try to keep this article short without missing anything valuable.


Editor’s note: Thanks go out to Mnemonic who put together this excellent review. While not strictly a VRtifact, future posts will draw the connection between the earliest augmented reality systems from the 80’s and 90’s and the Wrap 920 AR. Stay tuned…

Virtual Research VR-4 Adapted For Stereoscopic Augmented Reality

Virtual Research VR-4 Adapted For Stereoscopic Augmented Reality – circa 1993


The Wrap 920 AR comes in a really big box compared to the compact package of the VR920. Honestly, I didn’t expect it to be so big, but that’s because of the bunch of different stuff inside, all packed in sockets cut into safety foam. You can see a list of the included items on the label on top of the box. I liked that each part of the package is individually wrapped; the head-tracking module comes in a small acrylic box, for example. Two AR markers on plastic bases are included, which is nice for checking out the Vuzix AR demos right away.

Drivers for the HMD are distributed digitally by Vuzix via the Internet, so no installation disk is included.


There is also a solid travel case included in the package; the HMD with cables and the VGA adapter fit in it nicely:


Adapters & cables

The Wrap 920AR is based on Vuzix’s Wrap series of HMDs, and like any other Wrap HMD it supports various video sources for input, including composite video, iPod / iPhone connection, and, what we’re interested in, regular VGA for PC connection. To use a VGA or composite video source you need to choose the proper “adapter”, which is actually a control box. All currently available adapters (VGA, composite / iPod) are included in the Wrap AR package. Vuzix recently announced a Wrap HDMI adapter, so I’m pretty sure it will be compatible. There’s also a DVI-to-VGA adapter in the package for DVI-I connections.


The Wrap AR HMD has two cables: one from the visor system (the HMD itself) with a small proprietary connector that goes into the control box, and another from the stereoscopic camera system, which is regular USB. The VGA control box has VGA and USB connectors. The USB connection is needed to power the screens and to provide audio and head-tracking.


Before starting to work with the Wrap 920AR at full capacity, we need to attach the head-tracker, the VGA adapter, and optionally headphones. On the inside, the HMD has two small jack connectors for ear-plug style headphones and a micro connector on the right brow for the head-tracker module.


The tracker connector and the HMD-to-control-box connector are very small and look fragile, so I recommend assembling these with care, because they need some force to plug in. But once connected, the tracker (and the control-box connector) stays securely in place and doesn’t bother you.

Computer connection


The Wrap AR connected to my netbook and laptop straightforwardly: just plug in the VGA and both USB cables and it’s ready. But the first thing I noticed when I tried to connect it to my (most powerful) stationary computer is that the cables are way too short!

The cables are much shorter than the VR920’s. I suppose Vuzix designed the glasses to be connected to laptops, which sounds reasonable for augmented reality usage. Besides, in the full PDF manual (more than 100 pages!) I found a mention that extension cables need to be used with stationary computers.
In short: I used a 3 meter VGA extension cable and two 2 meter USB cables to connect the Wrap AR to my stationary PC, and in this configuration the device works without any issues.

After connecting the HMD, the drivers installed smoothly on both of my systems, Windows XP 32-bit and Windows 7 64-bit; I also tried them on Windows Vista 32-bit without any problems. The HMD supports an input video signal with resolution up to 1024 x 768 at a 60 Hz refresh rate. It can be used in both clone and extended monitor modes like any external monitor.

Adjustments and ergonomics

Once the HMD is connected to the PC (and the drivers are installed), buttons on the control box (adapter) provide access to the HMD adjustments menu. In the menu you can switch between a few brightness and contrast presets, set your own preset, switch between stereoscopic and monoscopic modes, switch between different stereoscopic modes (side-by-side stereo pair, or various anaglyphs), swap the left and right eyes in stereo mode, and adjust the headphone volume.

On the back of the adapter you'll find a little screwdriver, which is needed to adjust focus. The two focus knobs sit under a rubber cover on top of the HMD and allow adjustment between +2 and -5 diopters independently for each eye.


When I first unpacked my Wrap AR and connected it to the PC, both knobs were set fully to the left, which I believe is -5. I have perfect vision and was completely unable to focus the screens this way. But with slight step-by-step adjustments, independently for each eye, I set the glasses to a very comfortable focus, and both “eyes” give me a sharp, clean image.

The nose-piece can also be adjusted for your particular nose, and the whole nose-piece assembly can slide in and out of the HMD. A spare nose-piece and ear-plug nozzles of different sizes are also included in the package.

The optics adjust between -5 and +2 diopters, but if your vision is worse than that, you may find it hard to wear the HMD over glasses with the head-tracker attached, so you will probably need contact lenses.

Personally I found the Wrap AR to be pretty comfortable; the HMD is easy to wear and light enough.

Screens, optics and visual quality

The Wrap 920 AR has two true 752 x 480 color LCD screens, located inside the visual module casings, which project the image down into a lens/prism system. In the Wrap's predecessor, the iWear VR920, the screens sat directly in front of the eyes; here Vuzix went with a different optical design. The optics have a narrow 31-degree diagonal FOV with 100% stereo overlap, pretty much the same as the VR920 (32 degrees). Personally I didn't feel the difference in image size, but of course I would prefer a bigger FOV, at least like the old VFX-1's 45 degrees.

Everything to the sides and below the screens can be seen clearly, so the HMD doesn't block your view. That's bad for Virtual Reality use, but can be good for Augmented Reality applications: you remain aware of your surroundings outside the screens, while all the AR “magic” happens inside them.


Compared with the VR920, picture quality inside the Wrap 920 is much better! The picture is perfectly clear and sharp, looks the same in both eyes, and color reproduction is much better too.

As many of you know, the iWear VR920 had “teething problems” with its screen system: the image in the left eye looked a little grainier (fewer colors) than in the right eye. Used for long periods, the VR920 could overheat, and when that happened image quality degraded: users began to notice “scan lines,” and in cases of extreme overheating many users reported a “black dot” appearing in a corner of the screen. Thankfully those artifacts weren't permanent, and after letting the HMD cool down by unplugging the USB cord the image became normal again.

The Wrap 920 AR has none of these teething problems. The glasses stayed plugged into my laptop for almost a full working day and only became normally warm. Image quality doesn't change, so it's really good that Vuzix solved these issues! This HMD can stay plugged in and powered on like a regular monitor for as long as you wish.

Also, at first I thought my Wrap AR had higher-resolution screens. What I mean is that with the desktop set to 1024 x 768, I was normally unable to read standard-sized Windows text in the VR920, whereas in the Wrap AR I can operate Windows almost normally; the text isn't perfect, but it's readable in all the menus. Perhaps it's the benefit of better optics, higher-quality screens, and a better image-scaling algorithm, but the fact is the picture is quite readable even at 1024 x 768.


The main stereoscopic mode of the Wrap 920 AR is a side-by-side stereo pair (parallel or cross-eyed, both are supported), which is very good from a developer's point of view because it's fairly easy to implement. It does, however, give less resolution per eye: if the Wrap 920 AR is running at 1024 x 768, you effectively get 512 x 768 pixels per eye, with each half scaled to the 752 x 480 screens.
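The per-eye arithmetic is easy to sketch. This is a hypothetical helper of my own, not anything from the Vuzix SDK:

```python
def per_eye_resolution(width, height, mode="side_by_side"):
    """Effective pixels per eye before the HMD rescales each half to its
    native 752 x 480 panels. Side-by-side halves the horizontal resolution;
    a top-bottom layout (not the Wrap's native mode) would halve the vertical."""
    if mode == "side_by_side":
        return width // 2, height
    if mode == "top_bottom":
        return width, height // 2
    raise ValueError("unknown stereo layout: " + mode)

# At the Wrap's maximum input resolution:
print(per_eye_resolution(1024, 768))  # (512, 768)
```

So the trade-off is exactly as described: perfect left/right synchronization and full frame rate, at the cost of half the horizontal pixels per eye.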

Crysis2 Stereo Side By Side

Crysis 2 in native stereo-pair mode

What's good is that both eyes are perfectly synchronized, and you can get a full 60 frames per second for each eye! Stereo-pair support makes the glasses perfectly compatible with the iZ3D and TriDef Ignition stereo drivers, and some new games, like Crysis 2 and Avatar, can output a stereo pair natively.

For those comparing specs with the VR920: it was limited to 30 FPS per eye in stereo because of its page-flipping stereoscopy; the Wrap series has no such limitation.


The Wrap 920 AR has two high-speed micro-cameras with 640 x 480 native resolution, actually capable of 60 frames per second (if your computer can handle it). The 60 Hz refresh rate gives a really smooth, almost perfect picture of the surroundings, which looks really good in stereo! In Windows, the Wrap AR camera system is recognized as two independent USB cameras.


I compiled a small example from the Vuzix SDK to check the stereo cameras, and tried to take a photo to give you some impression of how it looks. In reality the picture in the glasses looks really nice, and it blends well with the surroundings. The FOV of the cameras and the FOV of the screen optics match closely (not perfectly, but close enough to give a proper illusion), which is a good point for AR.

The picture quality of the Wrap AR cameras isn't as nice as the new top-of-the-line Logitech webcams, for example, but it's far better than most regular webcams provide, and they work very fast when the PC can keep up. Besides, exposure, white balance, and other settings are manually controllable from the camera drivers, and they can also be controlled from inside your own software using the Vuzix SDK.
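Because Windows sees the Wrap's cameras as two ordinary USB webcams, any generic capture library can drive them; you don't strictly need the Vuzix SDK for video-see-through experiments. Here's a minimal sketch using OpenCV, with the assumption that the cameras enumerate as device indices 0 and 1 (indices vary per system):

```python
import numpy as np

def side_by_side(left, right):
    """Tile two equal-sized frames into one side-by-side stereo frame,
    the layout the Wrap expects in its stereo-pair mode."""
    assert left.shape == right.shape
    return np.hstack((left, right))

def capture_stereo_pair(left_id=0, right_id=1):
    """Grab one frame from each eye camera (requires the HMD attached;
    the device indices here are an assumption)."""
    import cv2
    cam_l, cam_r = cv2.VideoCapture(left_id), cv2.VideoCapture(right_id)
    ok_l, frame_l = cam_l.read()
    ok_r, frame_r = cam_r.read()
    cam_l.release(); cam_r.release()
    if not (ok_l and ok_r):
        raise RuntimeError("camera read failed")
    return side_by_side(frame_l, frame_r)

# The compositor itself can be checked without any hardware:
l = np.zeros((480, 640, 3), dtype=np.uint8)
r = np.ones((480, 640, 3), dtype=np.uint8)
print(side_by_side(l, r).shape)  # (480, 1280, 3)
```

Since each camera is a separate USB device, frame grabs aren't hardware-synchronized; for casual AR the small timing skew is invisible, but it's worth knowing about.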

Head Tracker

The new Wrap 6TC head-tracker includes 3 gyroscopes in addition to 3 accelerometers and 3 magnetometers, which greatly improves orientation tracking compared to the VR920's head-tracker. It is much less dependent on external magnetic influences and does not require frequent recalibration. In fact, I calibrated it only once, and the tracker still worked fine a week later without any recalibration.

Head Tracker

Wrap 920 AR Head Tracker

Using the SDK we can receive full 6DOF information from the tracker. Absolute 3DOF orientation (yaw, pitch, roll) is provided, along with relative 3DOF position (X, Y, Z). However, Vuzix notes in the current SDK that the position info is in early beta and can't be used for anything serious beyond simple movement detection. Indeed, I couldn't figure out how to use those X, Y, Z values with the current driver and SDK; they seem pretty chaotic, but hopefully they will become useful with a future driver release.
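In practice the solid part of the 6DOF data is the absolute yaw/pitch/roll, which you feed into your own renderer's camera. A generic sketch of that step (the axis conventions here are my assumption, not the Vuzix SDK's; check the SDK docs for the tracker's actual axes and units):

```python
import math

def ypr_to_forward(yaw_deg, pitch_deg):
    """Convert tracker yaw/pitch (degrees) into a forward-looking unit
    vector for a virtual camera. Roll doesn't change the view direction,
    only the camera's up vector. Assumes yaw about +Y, pitch about +X,
    with (0, 0) looking down +Z -- a convention, not the SDK's spec."""
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    x = math.sin(yaw) * math.cos(pitch)
    y = math.sin(pitch)
    z = math.cos(yaw) * math.cos(pitch)
    return (x, y, z)

print(ypr_to_forward(0, 0))   # straight ahead: (0.0, 0.0, 1.0)
print(ypr_to_forward(90, 0))  # turned 90 degrees: x = 1, z near 0
```

Each frame you poll the tracker, convert the angles, and point the camera; the beta X/Y/Z position values can simply be ignored until the drivers mature.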


The Wrap 920 AR drivers support Windows XP, Vista, and Windows 7 in both 32- and 64-bit versions.

The Wrap 920 AR ships with a “maxReality” license, an AR plugin for 3D Studio Max (2010 and 2011). I haven't used this software so far, because I plan to use the glasses with my own software, but maxReality has a few nice demos that can be used to check all the functions of the Wrap 920 AR.

I captured video of Vuzix's “Dragon” demo; the clips are stereoscopic, so you can see on your own stereo setup how it actually looks in the HMD (please watch in HD).

The Wrap AR is also fully supported by Vuzix's VR Manager software, which provides stereoscopy and head-tracking for more than a hundred game titles. Even though this isn't a gaming HMD, it's nice to have the feature.


A free SDK with C++ examples is available from the Vuzix website, giving programmers access to every feature of this HMD, including the head-tracker, stereoscopy, and cameras, plus an optical marker-tracking example based on the ALVAR library. The SDK is well documented and the examples are useful.


Some statements in this review also apply to other HMDs in the Wrap series (like the Wrap 920), particularly regarding stereoscopic support and the head-tracking module. Other statements may be close too, but I can't guarantee that 100%, because I've only used the AR version, which differs a little from the other Wrap HMDs even externally, and may also differ a little in its electronic components.

If you build your own Augmented Reality software, or do research and development in this field, and you have the budget for a Wrap 920 AR, I would say go for it! It's a nice lightweight HMD that provides true stereoscopic video see-through, a ready-to-use hardware solution. I'm happy I bought it for my projects.

The Wrap 920AR next to a VR920 modded into AR glasses with a Logitech HD C310 webcam.

Flashback To The ’40s

We all know that the 1950’s were the golden age of 3D movies, Hollywood’s attempt to fend off the rapidly growing television audience. Their 3D thrust was short lived, and with a few exceptions, we enjoyed almost 50 years of 2D bliss. This time around 3D is harder to avoid… TV has it too! For those of you who want to preserve your 2D way of life, liberty, and the pursuit of happiness, Amazon now offers 2D Glasses, a simple way to revert passive 3D systems for 2D viewing. And who says the world isn’t flat?

2D Glasses


Night vision goggles of the Red Army!

I recently came across information that the USSR army, just before World War II, developed electronic head-mounted infrared night-vision goggles for tank crews! It's not exactly a virtual reality subject, but it is nevertheless part of the early days of electronic HMDs in the Soviet Union.

In 1939-1940, the infrared goggles “Ship” and “Dudka” were tested by crews of BT-7 light tanks. “Ship” was developed by the national optics institute and the Moscow institute of glass. The device included infrared periscope goggles and additional accessories for driving the machine at night.


Ship - Infra Red Night Vision Goggles

The upgraded version, “Dudka,” had field tests in June 1940 and again in January-February 1941. The device included infrared periscope goggles for the tank driver and the crew commander, two infrared illuminators (1 kilowatt and 140 mm diameter each), a control unit, a separate IR signal beacon, cables, and accessories for the goggles.


Dudka - Another Infra Red Night Vision System From Pre-War USSR

BT-7, light tank


The goggles weighed 750 grams (without the helmet mount), with a 24-degree FOV and a nighttime viewing distance of 50 meters. These devices met all of the Red Army's specifications, but because of the bulky design and usability problems, especially in winter, the goggles needed further development, which never happened because of World War II.

Tank Driver Wearing Dudka


Research and development continued after WWII.


An upgraded, early post-war version of the IR goggles (IKN-8) for T-34 tank crews

Read the whole story in English or the original Russian.