When Virtual Reality Was Always Virtual

Latest Posts

The Great Bubble

2013 has brought much excitement to the VR world, especially the perception of great breakthroughs in Head Mounted Display products. Let's take a deep breath, then hold up a distant mirror to the cautionary history of VR from 1993-1998. Back then there was also a fever pitch of excitement as companies pitched great breakthroughs and attracted outsize investments from private and public markets, and yet the best of them crashed and burned, taking their investors, customers, and vendors down with them.

I invite you to review this chronological collection of VR news reports, beginning with the fire sale of VPL's much vaunted patent portfolio. The reports follow the rise and fall of three VR industry giants: Virtuality, IO Systems, and Superscape. I present these as examples, but they're not alone. Very few VR firms escaped the '90s. Sadly, the VR bubble burst long before the late-'90s tech bubble did. It wasn't the economy, stupid!

Does history repeat itself?

View PDF

Vintage VR-4 Head Mounted Display Teardown

Here’s a much more detailed teardown of the Virtual Research VR-4 Head Mounted Display, done by one of the engineers at Virtual Research sometime in 1994. He shows us how to remove the backlight inverter and the main PCB.

‘Scuse the vintage VHS EP mode recording. I was trying to save on video tape costs; a 6 hour tape cost $1.50!

Digital VR Rehab

For years, therapists would attempt to treat smokers and alcoholics using real-life triggers: let addicts see a lighter or an empty bottle, or even a photo of something smoking- or drinking-related, to trigger cravings, then teach them coping strategies. The approach was limited because patients could tell they were in a lab, and their newfound coping skills were not always transferable to the real world. Enter Virtual Reality:

The theory is that the immersive VR world better approximates the real world, allowing better skills transference and letting researchers easily A/B test different scenarios. A pilot program at Duke run by assistant professor Zach Rosenthal, with funding from the National Institute on Drug Abuse and the Department of Defense, has already run about 90 people through this VR rehab trial. No formal conclusions have been reached, but preliminary data suggests effectiveness. More reading here.

Awesome VR Optics for 1″ Class Displays At Less Than Ten Dollars!

Professional wide field of view Virtual Reality optics for less than the price of a couple of double lattes! A while back I demonstrated a design for Leep On The Cheap, a proof of concept for wide field of view optics on 3″ to 4″ display panels. Trouble was… there was quite a bit of distortion and chromatic aberration. However, it sparked quite a bit of thought and development in the VR DIY community. They’re the ones doing all the heavy lifting.

32mm Erfle Eyepiece

So… it’s time to come back with another optical design for wide field of view VR, but this time the optical qualities are first rate and remarkably inexpensive. Of course there’s a new set of trade-offs: field of view is limited to about 65 deg. (not bad, but not totally immersive), and I rely on somewhat smaller display panels, about 1″ to 1.5″ diagonal. This is roughly the same as the Nvision Datavisor LCD, Visette Pro, and Virtual Research VR4/V6/V8.

This design, and many commercial versions, relies on a unique characteristic of telescope eyepieces: they can be used directly as HMD optics without modification. Even better, they’re made in fairly large quantity, with a large selection of optical characteristics and quality, and somebody else has already solved the issues of distortion, chromatic aberration, internal reflections, coatings, and aspheric design. Did I mention that they’re inexpensive? The sweet spots are the Erfle and Plossl designs; Erfle offers a wider field of view. Even wider fields can be achieved with variations on the Nagler design, but the weight becomes prohibitive.
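As a rough sanity check on the numbers above, the apparent field of view of a display behind a simple magnifier can be estimated with the thin-lens relation FOV ≈ 2·atan(d / 2f). This is a back-of-the-envelope sketch (the helper function is mine, and real multi-element eyepieces like the Erfle deviate from the thin-lens model), assuming the panel sits at the eyepiece's focal plane:

```python
import math

def eyepiece_fov_deg(display_diag_in: float, focal_length_mm: float) -> float:
    """Approximate apparent FOV for a display panel at the focal plane of
    a simple magnifier. Thin-lens approximation; real eyepieces differ."""
    diag_mm = display_diag_in * 25.4          # inches -> millimeters
    return math.degrees(2 * math.atan((diag_mm / 2) / focal_length_mm))

# A 1.5" diagonal panel behind a 32mm fl eyepiece lands in the same
# ballpark as the ~65 degree figure quoted above.
print(round(eyepiece_fov_deg(1.5, 32.0), 1))
```

The estimate comes in a bit under 65 degrees, which is about what you'd expect: a wide-angle design like the Erfle squeezes out a somewhat larger apparent field than the naive thin-lens number.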

It’s easier to give the tour by video… viddy this my droogies:

Lens sources:

30mm fl Erfle from Surplus Shed – $9.50

32mm fl Erfle from Surplus Shed – $12.50

I’ll leave you hanging re: the LCD panel. The one in the video shown above was torn from a Virtual Research V6; low res, old school.

More info on eyepieces:


Common Telescope Eyepiece Designs


Dump Those Silly Colored 3D Glasses

They say you’re not a true 3D enthusiast until you’ve got a shelf full of red/cyan and green/magenta anaglyph 3D glasses. I’m ready to dump mine in the waste bin, but there’s this little problem; two more shelves of anaglyph DVD, BluRay and VHS movies collected over the years. Soon the studios will start to release the latest blockbusters in full color BluRay 3D, but somehow I doubt they’ll find the time or budget to convert “Comin’ At Ya” or “The Stewardesses” from anaglyph to full color 3D. Who knows, maybe the full color film prints are lost forever.

For a couple of years there were two online purveyors of converted movies; they both did a creditable job, but recently dropped out of sight. My suspicion is that the films’ copyright holders got wise and shut them down for selling unlicensed copies. I doubt the sin had anything to do with 3D, just that they were selling unauthorized copies of the studios’ movies.

However, for those of us who purchase legitimate anaglyph 3D movies from Amazon or our local video store, the courts have strongly affirmed that we can watch them any way we choose, in private, whether we watch standing on our head, projected on a fishbowl, reflected off the water in our toilet, or even in full color 3D! In other words, if we have a gadget at home that translates anaglyph movies into full color and we use it solely to privately watch the legitimate 3D movies we own, we’re well within our legal rights.

And here’s how!

Anaglyph movies come in a number of flavors. This post deals with red/cyan and green/magenta anaglyphs, which cover the majority of releases available. The goal is to create full color left and right video streams from these anaglyph releases. To keep things simple, this post describes a method to output either side-by-side or top-bottom formatted full color stereoscopic video in a single file. The tools used are open source: AVISynth and VirtualDubMod, as is the code I’ve contributed (refer to the license that accompanies the code). There are download links for everything needed at the end of this post.

This post is not a cookbook for the uninitiated. If you are comfortable with AVISynth and VirtualDubMod, or are willing to learn these two programs, you’ll be fine. You don’t need to be a coder; you just need experience using these programs. Similarly, you need basic knowledge of 3D formats and some experience critically viewing 3D. If you’re a 3D enthusiast or a professional, you’re already there.

Prepare your computer

First thing is to get your system ready. Sadly there’s no AVISynth for ’nix or Mac, so we’re talking PC with XP, Vista, or Win7. The conversion process should run on any PC capable of running these OS’s, but it won’t be much fun without a dual core and 2GB+ of memory. I’ve done my testing on an i5-750 with 4GB. Your mileage may vary.

At a minimum, install AVISynth, VirtualDubMod, and the K-Lite Codec Pack. Since AVISynth only ingests AVI, WMV, MPG, and MKV files, you’ll almost certainly want to install software that converts your DVDs and BluRays to one of these formats. For DVD, I recommend VOB2MPG (and if you need to access encrypted DVDs, add DVD Decryptor.) For BluRay, I recommend MakeMKV.

The MOST IMPORTANT thing in getting your DVDs and BluRays into a computer format is that the ripping process should not re-encode (i.e. decompress and then recompress) your video. Anaglyph colors are very delicate and each compression generation makes your 3D conversion ever so much more difficult. VOB2MPG3 and MakeMKV both can do the conversion without recompression.

How the code works

The actual 3D anaglyph-to-full-color conversion logic is contained in two AVISynth scripts (.AVS files). The core logic is in AnaExtract.AVS; unless you’re a coder, you probably won’t fool with this. Just tuck the file away in a safe folder somewhere. The other script is where you set all the parameters for a particular movie you’re converting. I like to name these files XX-DeAna.avs, where XX is replaced by the name of the movie. However, you can name each copy of the file anything you like.

The video near the top of this post demonstrates how to use AVISynth, VirtualDubMod, and the parameter file to convert an anaglyph movie. While that should be enough to get you going, the gory technical details begin now. I’ll get back to the parameter .AVS file later on, but first let’s pick apart the actual core processing module: AnaExtract.AVS. You probably will never have to change anything in this section of code; it’s always “included” with your parameter script which is described later in this post. But, if you want to see what goes on under the hood, keep reading this section.

Yes, the code is open source

# AnaExtract.avs
# Tone at VRtifacts.com
# V 0.9 June 27, 2010
# Copyright (C) 2010 Tony Asch
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>

The main points: there’s no warranty; feel free to modify and distribute, but you must include the source in any distribution.

Find the files we want to convert

# Snag our video files
vidL = DirectShowSource(anaglyphName)
vidOrig = DirectShowSource(anaglyphName)
vidsound = DirectShowSource(SoundName, audio=true, video=false)

The parameter AVS file identifies the video file paths and names. Here we open a couple of copies of the anaglyph movie and a copy of the audio only.

Deal with different anaglyph formats – red/cyan or green/magenta

# We need these for later
# Skip if we've already got a Mono Left source
# Red/Cyan?
(inputFormat == "RC") && (isMono != "monoLeft") ? Eval("""
vidL = ConvertToRGB(vidL)
vidL = MergeRGB(vidL.ShowRed("YV12"), vidL.ShowRed("YV12"), vidL.ShowRed("YV12"))
vidL = Greyscale(vidL)
""") : Eval(""" """)
# Green/Magenta?
(inputFormat == "GM") && (isMono != "monoLeft") ? Eval("""
vidL = ConvertToRGB(vidL)
vidL = MergeRGB(vidL.ShowGreen("YV12"), vidL.ShowGreen("YV12"), vidL.ShowGreen("YV12"))
vidL = Greyscale(vidL)
""") : Eval(""" """)

# Skip if we've already got a Mono Right source
# Red/Cyan?
(inputFormat == "RC") && (isMono != "monoRight") ? Eval("""
vidR = ConvertToRGB(vidOrig)
vidR = MergeRGB(vidR.ShowGreen("YV12"), vidR.ShowGreen("YV12"), vidR.ShowGreen("YV12"))
vidR = Greyscale(vidR)
""") : Eval(""" """)
# Green/Magenta?
(inputFormat == "GM") && (isMono != "monoRight") ? Eval("""
vidR = ConvertToRGB(vidOrig)
vidR = MergeRGB(vidR.ShowRed("YV12"), vidR.ShowRed("YV12"), vidR.ShowBlue("YV12"))
vidR = Greyscale(vidR)
""") : Eval(""" """)

Since many 3D DVDs come with both a 2D and a 3D copy of the same movie, the 2D version is often a perfect reproduction of either the left or right eye view. If so, we’ve cut our work in half and are assured that one eye’s view is perfect. The block of code above separates the two anaglyph colors and produces independent left and right video streams, albeit primarily luminance information once they are transformed to greyscale. If we already have one eye from the 2D copy, we don’t need to extract an imperfect copy of that eye’s view from the anaglyph. Cool! In the red/cyan case I found the blue channel to be quite noisy, so even though cyan is a combination of blue and green, I just grabbed the green and used it as luminance information. If you don’t have a 2D version, set the parameter isMono = “monoNone”; the monoName parameter can then be set to “nothing” or any other string and will not be used by the code.
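The channel-splitting idea is easy to sketch outside AVISynth. For a red/cyan frame, the left view's luminance comes from the red channel and the right view's from the green channel (skipping the noisy blue half of cyan, as noted above). A toy Python illustration, not part of the actual pipeline (split_red_cyan is a hypothetical helper of mine):

```python
def split_red_cyan(frame):
    """Split a red/cyan anaglyph frame (list of rows of (r, g, b) pixels)
    into greyscale left/right views: left = red channel, right = green.
    The blue component of cyan is ignored, mirroring the noisy-blue note."""
    left = [[px[0] for px in row] for row in frame]
    right = [[px[1] for px in row] for row in frame]
    return left, right

# One 2x2 toy frame: a pure-red pixel shows up only in the left view.
frame = [[(200, 10, 10), (0, 180, 180)],
         [(50, 50, 50), (255, 0, 0)]]
L, R = split_red_cyan(frame)
print(L[0][0], R[0][1])  # 200 180
```

The AVS code does the same thing with MergeRGB + Greyscale, just operating on whole video streams instead of individual pixels.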

Prepare the full color information

# Prepare color information by resizing the image down and back up,
# creating a blurred version of the source for color restoration
# Note: the decimateHoriz/Vert values are percentages (1-100)
# defining the size of the shrunken version. Small numbers give more blur
# Code has a hack to make sure the shrunk version dimensions are even numbers
vidColor = DirectShowSource(PureColName)
vidColor = BicubicResize(vidColor, Int((width(vidL) * decimateHoriz) / 200.0) * 2, Int((height(vidL) * decimateVert) / 200.0) * 2)
vidColor = BicubicResize(vidColor, width(vidL), height(vidL))

In the prior step we extracted greyscale copies of the left and right video streams. In preparation for re-coloring them, we need to calculate the colors to apply. Since the left and right videos are very similar, perhaps offset horizontally in places, and since the human eye perceives much more detail in luminance than in chroma, we can apply color with a very blurred paint brush. Back in the parameter .AVS file we defined the name and path of the best video file to use for extracting color information. This would be the 2D version, if we have it; otherwise it will be the anaglyph version.

In AVISynth, the quickest way to blur is to resize a video to a very small size and then resize it back up to full size. For the user’s convenience, the parameter file specifies this independently for width and height, both in percentage terms (1-100). The “Int(… / 200.0) * 2” expression scales the percentage to a fraction and also assures that width and height will be even numbers (required by AVISynth).
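That shrink-size arithmetic can be checked in isolation. A small Python sketch of the same formula (shrunk_dim is a hypothetical helper of mine, mirroring the AVS expression, not code from the scripts):

```python
def shrunk_dim(full: int, decimate_pct: float) -> int:
    """Mirror the AVS expression Int((full * pct) / 200.0) * 2:
    scale by the percentage, divide by 200, truncate, then double,
    so the result is always even (AVISynth requires even dimensions)."""
    return int((full * decimate_pct) / 200.0) * 2

# A 720-wide frame at decimateHoriz = 5 shrinks to 36 pixels wide;
# an odd-ish input like 715 still yields an even result.
print(shrunk_dim(720, 5.0), shrunk_dim(715, 5.0))  # 36 34
```

Dividing by 200 and doubling is the same as dividing by 100, except the truncate-then-double step guarantees evenness.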

Horizontal and Vertical color blurring are specified independently because anaglyph 3D images are offset primarily along the horizontal dimension and we need much more blurring horizontally than vertically. Nonetheless, many movies are shot with imprecise 3D cameras which are not perfectly aligned and may exhibit dissimilar lens distortions between the left and right lens. Vertical color blurring can help to cover these problems. Also, remember that we are not yet blurring the luminance, which is where we perceive image detail.

Correct for any leakage between left and right

# Undoubtedly a little bit of the wrong eye has leaked over during the Anaglyph encoding process
# Subtract out a bit of the wrong eye. Specified in percentage (0-100)
vidL = Overlay(vidL, vidR, mode="subtract", opacity=(leakCorrectL / 100.0))
vidL = Levels(vidL, 0, 1.0, Int(255 * (1.0 - (leakCorrectL / 100.0))), 0, 255, coring=false)

vidR = Overlay(vidR, vidL, mode="subtract", opacity=(leakCorrectR / 100.0))
vidR = Levels(vidR, 0, 1.0, Int(255 * (1.0 - (leakCorrectR / 100.0))), 0, 255, coring=false)

Often there is leakage between the left and right anaglyph images (and thus the left and right luminance images we’ve computed), produced by the mastering or encoding. This bit of code corrects for it by subtracting a small amount of the opposite eye (specified in the parameter file). Since the subtraction is clamped at zero, we then rescale to the full 0-255 (8-bit) luminance range.
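Per pixel, that correction amounts to a clamped subtraction followed by a rescale back to full range. A simplified single-channel sketch of the same math (leak_correct is a hypothetical helper; it approximates what the Overlay/Levels pair does, not their exact internals):

```python
def leak_correct(own: int, other: int, leak_pct: float) -> int:
    """Subtract a fraction of the opposite eye's luminance, clamped at
    zero like Overlay's subtract mode, then rescale so the remaining
    range maps back to 0-255, like the Levels call in the script."""
    k = leak_pct / 100.0
    subtracted = max(0.0, own - other * k)       # clamped subtraction
    return min(255, round(subtracted / (1.0 - k)))  # stretch back to 0-255

# 10% leakage: a pixel at 128 with 60 of crosstalk in the other eye
# drops by 6, then stretches back toward full range.
print(leak_correct(128, 60, 10))
```

Note the rescale: without it, every corrected pixel would end up slightly darker than it should be, since the subtraction lowers the maximum achievable value.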

Reduce color fringing

# Horizontally blur if needed - reduces fringing from excessive video peaking.
# Skip if = 1.0 (no blur). Otherwise larger value produce more blur
blurRight != 1.0 ? Eval("""
vidR = BicubicResize(vidR, Int(width(vidOrig) / (blurRight * 2.0)) * 2, height(vidOrig))
vidR = BicubicResize(vidR, width(vidOrig), height(vidOrig))
""") : Eval(""" """)
blurLeft != 1.0 ? Eval("""
vidL = BicubicResize(vidL, Int(width(vidOrig) / (blurLeft * 2.0)) * 2, height(vidOrig))
vidL = BicubicResize(vidL, width(vidOrig), height(vidOrig))
""") : Eval(""" """)

In an attempt to make DVDs look sharper, some anaglyph movies have excessive peaking, a video process meant to enhance detail. This leaves thin white fringes on the edges of color and luminance boundaries, and the boundaries of our red/cyan or green/magenta areas are particularly susceptible. While there are doubtless more sophisticated ways to deal with these fringes, in the interest of processing (and coding) speed, an optional horizontal blur is applied to the offending luminance stream(s).

Convert to YUV Color Space

# YUV color space for chroma operations
vidR = ConvertToYV12(vidR)
vidL = ConvertToYV12(vidL)
vidColor = ConvertToYV12(vidColor)

Although we’ve tried to keep our processing in RGB space as much as possible, AVISynth likes to paint color in the YV12 color space, and so we convert.

Restore the color

# Use our blurred color video to restore colors to greyscale Right and Left videos
# Skip if we've already got a Mono Right source
isMono != "monoRight" ? Eval("""
vidR = mergechroma(vidR, vidColor)
""") : (""" """)
# Skip if we've already got a Mono Left source
isMono != "monoLeft" ? Eval("""
vidL = mergeChroma(vidL, vidColor)
""") : (""" """)

At long last we apply color to the two luminance video streams, except in the case where we already have a perfectly serviceable stream from the 2D version off the DVD or BluRay.

Do some final color correction

# Final brightness and contrast tweak
vidR = Tweak(vidR, bright=tweakBrightR, cont=tweakContR, sat=tweakSatR, coring=false)
vidL = Tweak(vidL, bright=tweakBrightL, cont=tweakContL, sat=tweakSatL, coring=false)

Chances are good that we may want to do a bit of brightness, contrast, and color saturation correction on each of the two video streams.

Process an existing 2D version

# Maybe we already have a 2D version for one of the eyes
isMono == "monoRight" ? Eval("""
vidR = ConvertToRGB(DirectShowSource(monoName))
""") : Eval("""
vidR = ConvertToRGB(vidR)
""")

isMono == "monoLeft" ? Eval("""
vidL = ConvertToRGB(DirectShowSource(monoName))
""") : Eval("""
vidL = ConvertToRGB(vidL)
""")

If we had that handy 2D version, we avoided processing the left or right stream that corresponds to that 2D view. Now it’s time to insert our 2D video in its proper place.

Swap left and right video if needed

# Swap if needed
swapAnaglyph == "Yes" ? Eval("""
vidTemp = vidL
vidL = vidR
vidR = vidTemp
""") : (""" """)

The user might want us to swap the left and right video streams. Just doing this to be polite!

Build the side-by-side or top-bottom output

# Build the Side by Side (SBS) or Top Bottom (TB) combination of Left and Right video
outputFormat == "SBS_Left_First" ? Eval("""
StackHorizontal(vidL, vidR)
""") : Eval(""" """)

outputFormat == "SBS_Right_First" ? Eval("""
StackHorizontal(vidR, vidL)
""") : Eval(""" """)

outputFormat == "TB_Left_Top" ? Eval("""
StackVertical(vidL, vidR)
""") : Eval(""" """)

outputFormat == "TB_Right_Top" ? Eval("""
StackVertical(vidR, vidL)
""") : Eval(""" """)

Assemble both video streams into a single video, either side-by-side or top-bottom, with the choice of whether left or right comes first (left/top).

Resize the output if needed

# Optionally resize the output video
outputResize == "Yes" ? Eval("""
BicubicResize(outputWidth, outputHeight)
""") : Eval(""" """)

The user might want to resize the final video smaller or larger. Here’s where it happens. The specified sizes are for the final combined single video stream.

Dub in the audio track

# Dub in the sound from the audio source opened earlier
AudioDub(last, vidsound)
And add the sound back in. If the proper codecs are installed, AVISynth should handle PCM, MP3, AC3, 5.1, 7.1, etc…

The Parameter File

The parameter file is where you tell AVISynth where your anaglyph movie is located in your file system, and where you set the conversion parameters for a specific movie. You’ll want a copy of this file for every movie you convert, because you’ll be editing parameters that control the conversion of a single movie. This is the file you load with File->Open Video in VirtualDubMod; it tells AVISynth how you want the movie converted. At the end of the parameter file, the core processing code is “included” and invoked. Although we walk through each parameter in the video at the top of this post, let’s have another run through.

Name the files

# Setup our input files
anaglyphName = "F:/3D Conversions/MovieFolder/YourMovie-GM.avi" # Anaglyph video
PureColName = "F:/3D Conversions/MovieFolder/YourMovie-2D.avi" # Video with color info (either Anaglyph or 2D)
monoName = "F:/3D Conversions/MovieFolder/YourMovie-2D.avi" # Possible 2D video for one eye, if not set to "nothing"
SoundName = "F:/3D Conversions/MovieFolder/YourMovie-2D.avi" # Video with the sound track we want

# Maybe we already have one eye's version in 2D already,
# i.e. the DVD or BR has both 2D and 3D versions
# Set to: monoRight or monoLeft or monoNone
isMono = "monoLeft"

This section tells the conversion process where all the input files are located. Most important is the anaglyph file. Second is the video with our color information: if we’ve got a 2D copy, that’s the best source for color; otherwise you can get reasonable color from the anaglyph video. If we have a 2D video and want to use it for either the left or right eye view, the third line is where to specify it; if not, you can put any string in this parameter. The fourth file tells the conversion where to get the audio tracks. Typically this will be either the 2D version or the anaglyph version, but you could use any video file with a sound track.

WARNING: all of these files must be synced to frame accuracy. Sometimes the 2D version and the 3D version are not exactly the same, often being different in the opening credits. You should pre-process the files to be frame accurate before running this conversion!

The next parameter, isMono, tells our conversion whether we already have a 2D version that corresponds to one eye’s view. Set it to monoLeft or monoRight to tell the conversion that the 2D copy should be used as the left or right output. If you don’t have a 2D version, or don’t want to use it, set isMono = “monoNone”.

Format the output file

# Swap eyes: either Yes or No
# Note: it is industry standard to put Red on the left eye for RC glasses
# and Green on the left eye for GM glasses
# It would be unusual to set this parameter to Yes
# since the un-swapped arrangement is either Red=Left or Green=Left
swapAnaglyph = "No"

# Output formatting:
# Choices are:
# SBS_Left_First, SBS_Right_First, TB_Left_Top, TB_Right_Top
# Meaning Side-by-Side (SBS) or Top-Bottom (TB)
# And choosing which eye is in which position
# This happens after the optional swap (above)
# and is somewhat redundant, but makes the eye choices clearer.
outputFormat = "SBS_Left_First"

# Resize the output video? Either Yes or No
# If set to No, then the output video is either
# twice the width of the input (for SBS)
# or twice the height of the input (for TB)
outputResize = "No"

# If we are resizing the output, specify the dimensions (Int)
# These dimensions apply to the stacked video size
outputWidth = 500
outputHeight = 200

This section deals with the output file. Although you name the output file in VirtualDubMod (File->Save As…), the layout of that file is determined here. If you want to swap the left and right videos in the output, set swapAnaglyph = “Yes”; otherwise it should be “No”.

Next you need to tell the conversion whether the two output videos should be arranged side-by-side or stacked vertically. In addition, you’ll indicate whether you want the left video first (i.e. on the left of a side-by-side, or the top of a vertical stack) or the right video first. The swapAnaglyph parameter reverses the meaning of this order.

If outputResize = “No”, then the width and height of the output video are taken from the input videos (which must all be the same size!). For side-by-side format, the output will be twice as wide as the input but exactly the same height. For a vertical stack, the output will be exactly as wide as the input but twice as tall.

If outputResize = “Yes”, then you can specify the output width and height.

Be careful with very large dimensions, especially the width, as some codecs can’t handle really big sizes (>2K).
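The default sizing rules above boil down to a couple of lines. A sketch of the no-resize output dimensions, assuming same-sized inputs (output_dims is a hypothetical helper, not part of the AVS scripts):

```python
def output_dims(in_w: int, in_h: int, output_format: str):
    """Default output size when outputResize = "No": side-by-side
    doubles the width, top-bottom doubles the height."""
    if output_format.startswith("SBS"):
        return (in_w * 2, in_h)
    return (in_w, in_h * 2)

# NTSC DVD input, both layouts:
print(output_dims(720, 480, "SBS_Left_First"))  # (1440, 480)
print(output_dims(720, 480, "TB_Right_Top"))    # (720, 960)
```

Note that a side-by-side NTSC DVD already lands at 1440 wide, so it doesn't take an exotic source to bump up against that >2K codec limit.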

Prepare the color information

# How much to blur the color information (Int or Float)
# This is done by shrinking the color video down in size
# and then resizing it back up to full resolution
# producing a blurred full resolution image
# The two decimate numbers are expressed as percentages
# i.e. a percentage of the full resolution to calculate
# the shrunk size. 100 means no shrink, 10 means 1/10 the
# resolution of the original, etc.
# Anaglyphs are only offset horizontally, so the color blur
# should be strong horizontally, but weak vertically
# For films where the cameras were misaligned vertically
# you will need to make the vertical blur greater.
decimateHoriz = 5.0 # Horizontal shrinkage
decimateVert = 20.0 # Vertical shrinkage - can usually be much bigger than decimateHoriz

As part of the conversion, the videos will be re-colored. This color is extracted from the file you assigned to PureColName. Because the anaglyph’s left and right views are shifted horizontally by an amount that depends on the depth and strength of the 3D, we can’t be exactly sure where to re-color. Instead we blur the colors so that they approximate the proper location, relying on the human eye’s relative insensitivity to color detail. The blur is achieved by shrinking down the color video and then expanding it back up to full size. There are separate parameters for horizontal and vertical shrink because 3D anaglyphs are displaced mostly in the horizontal direction, and therefore more horizontal blurring is needed. The vertical blurring can help compensate for vertical misalignment of the cameras, lens distortions, and other vertical artifacts.

Deal with left-right leakage and color fringing

# In case one anaglyph eye has leaked into the other
# We can try to remove that leakage by subtraction
# Expressed as percentage (int or float) (-100 to 100) (0 means none)
leakCorrectR = 10 # Leakage of left into the right eye
leakCorrectL = 0 # Leakage of right into the left eye

# Option to horizontally blur the left and right videos,
# just before the color is restored (before optional LR swap)
# Helps remove some of the fringing that appears in poor DVD encodes
# Set to exactly 1.0 for no processing (faster!!),
# > 1.0 blurs... try 1.5 to 4.0
blurLeft = 1.0
blurRight = 2.0

Here we attempt to correct for leakage of left into right and vice versa. If you see ghosts of one side appearing in the other, try subtly adjusting these parameters. Of course, with a great BluRay anaglyph, no correction may be needed at all.

Some anaglyph videos, especially those from DVD or VHS sources will show some fringing around the anaglyph color boundaries after separation by this conversion process. The fringes can be minimized by a slight horizontal blurring.

A final round of color correction

# Final brightness and contrast adjustments
tweakBrightL = 0 # Left brightness, integer to add to each pixel (pixels are 0-255)
tweakContL = 1.0 # Left contrast adjustment (1.0 means no contrast adjustment)
tweakSatL = 1.0 # Left saturation adjustment (1.0 means no saturation adjustment)

tweakBrightR = -50 # Right brightness, integer to add to each pixel (pixels are 0-255)
tweakContR = 1.35 # Right contrast adjustment
tweakSatR = 1.3 # Right saturation adjustment

Often the anaglyph conversion process will leave the videos looking a bit washed out, or dark, or desaturated. Here is your opportunity to do some basic color correction on either the left or right video (or both!)

Load the actual conversion code

# Common code to do the conversion
# Make sure this file path points to
# the file on your system.
import("F:\3D Conversions\AnaExtract.avs")

And finally, at the end of the parameter file, the actual conversion code is invoked. Just make sure the path to AnaExtract.AVS is correct for your system configuration. You can put it wherever you want, but you’ll need to change the final line of the parameter file to point to the proper location in your file system.


Anaglyph Conversion AVS Scripts



K-Lite Codec Pack (not required, but makes things much easier!)

AC-3 Sound Codec (not required, but the AC-3 codec in K-Lite doesn’t work with AVISynth)

MakeMKV – Rips BluRay to MKV with no re-encoding (currently free)



Stuff Yet to be Done

  • Convert red/blue anaglyph (trivial)
  • Convert ColorCode anaglyph (brutal!)
  • Better defringing
  • Output dual stream WMV
  • Output 2 separate files
  • Better leakage correction
  • Interlaced output
  • Optimize for speed
  • Port to After Effects, Premiere, Vegas, etc…
  • Linux version

It’s open source. You can help too!


Much of this code was inspired by prior AVISynth anaglyph converters from Olivier Amato and The Lone Wandering Soul. You probably will see some similarities in a few areas. That’s not coincidental. My thanks to both of them.

The Cart Before The Horse, Once Again – Project Glass

Google has been tearing through the bandwidth over at the Patent Office in defense of Project Glass, April’s much touted announcement of Google’s entry into the world of augmented reality and head mounted displays. One especially clever patent covers their bases on the use of the glasses’ nose bridge as a power switch.

Trouble is: where’s the beef? ReadWriteWeb nicely summarizes just why Google stressed that their promo video was just a “concept”, not anything we should expect in the foreseeable future. A few well aimed snippets from their article:

What a disappointment! Google’s prototype heads-up display glasses do not have the Terminator-style graphics shown in the concept video. They just show a simple readout above the user’s line of sight for now. That’s no fun.

After the video came out, Google execs immediately started showing up at conferences and on talk shows wearing Google glasses. But they were vague about the actual capabilities of these prototypes. When Sebastian Thrun dared to demo the camera while live on the Charlie Rose show, the result was pretty harrowing.

Concept videos cross the line when the company can’t deliver the goods. That’s why it’s risky to make them. As writer John Gruber is fond of pointing out, that’s why Apple stopped making such videos. Apple learned its lesson. Now it ships the devices of the future before it ever shows them off, leaving its competitors looking like they’re trying too hard.

Anyway, read the article and decide for yourself…

You may also enjoy this “concept” video:




Sega VR – Mighty Barfin’ Power Rangers (we are the 40 percent)

Sega (all hail Sonic!) announced Sega VR, a $200 headset for the Genesis console, in 1991. A prototype was finally shown at summer CES 1993, and the project was consigned to the trash heap of VR in 1994, before any units shipped. Sega claimed that the helmet experience was just too realistic for young children to handle, but the real scoop from researchers showed that 40% of users suffered from cybersickness and headaches. It’s fair to say that Sega undoubtedly anticipated a sea of lawsuits; as one industry pundit put it: “It will be like the Pinto’s exploding gas tank.”

Perfectly capturing the annoying VR hype of the era is Alan Hunter’s (MTV) summer 1993 CES intro of Sega VR:

Money quote from a teen featured in the promo: “I thought I was going to have to wait till I was old… like 30, to get VR at home!” It’s now 2012, he’s closing in on 40, and still waiting.

Much more info can be found in Ken Horowitz’s 1994 review. Four games were produced especially for Sega VR, never to be released.

Here’s some sense of the much feared “realism” which provoked Sega to pull the plug on production:

Much to Sega’s credit, their VR fail was at least an original marketing effort, whereas later in the 1990s Nintendo’s Virtual Boy and Atari’s (Virtuality-designed) Jaguar VR crashed and burned in much the same way (although at far greater expense).