My name is Pavel Koshevoy. I am a Russian-American living in Salt Lake City,
right next to the University of Utah.
If you are interested, you may take a look at my photo gallery; it
has pictures :)
In August of 1999 I completed the Bachelor of Science program in
Computer Science at the U.
In November of 1999 I was hired full time at
Parametric Technology Corporation,
where I worked on Pro/Engineer -- a high-end MCAD package. In
particular, I worked on the
Style (ISDX) and
Warp features of Pro/Engineer.
In August 2005 I earned a Master of Science degree in
Computational Engineering and
Science at the U.
I worked at the Scientific Computing and Imaging (SCI)
Institute until May 2007. You can see a summary of my accomplishments at SCI.
Currently, I am a Sr. Software Engineer
at Sorenson Media, working
on Sorenson Squeeze
and adjacent projects. I was originally hired as a contractor to
develop an algorithm for detecting matching video sequences. I did
that within a couple of months; however, company priorities had
shifted, and the fruits of that labor remain unused.
Since then, some of my accomplishments include the audio filter
preview feature in Squeeze, cross-platform JNI video capture/playback
libraries for Squish, a shared pointer implementation with support for
multiple inheritance, the Squeeze project tree model/view implementation
(based on the abominable Qt4 abstract model/view architecture),
WebM/VP8 support, a 32/64-bit shared-memory-mapped-file IPC layer,
adaptive streaming settings UI design, etc.
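The multiple-inheritance support in that shared pointer is the interesting part: converting a derived-class pointer to a second base class changes the address, so each copy must carry its own typed pointer while a single shared control block remembers the original object address for deletion. A toy sketch of the idea (my own names and simplifications here, not the actual Sorenson code):

```cpp
#include <cstddef>

// Toy reference-counted pointer illustrating multiple-inheritance support.
// All copies -- including base-class conversions -- share one control block;
// the control block remembers the address the object was created with, so
// deletion is correct even when a copy holds an offset base-class pointer.
struct Ctrl
{
    long count_;
    void (*destroy_)(void*);
    void* object_;  // the original (most-derived) object address
};

template <typename T>
class SharedPtr
{
public:
    template <typename U> friend class SharedPtr;

    explicit SharedPtr(T* p = 0) : ptr_(p), ctrl_(0)
    {
        if (p)
        {
            ctrl_ = new Ctrl{ 1, [](void* o) { delete static_cast<T*>(o); }, p };
        }
    }

    SharedPtr(const SharedPtr& other) : ptr_(other.ptr_), ctrl_(other.ctrl_)
    {
        if (ctrl_) ++ctrl_->count_;
    }

    // converting copy: the compiler adjusts U* to T* (applying the
    // base-class offset, if any); the control block is simply shared:
    template <typename U>
    SharedPtr(const SharedPtr<U>& other) : ptr_(other.ptr_), ctrl_(other.ctrl_)
    {
        if (ctrl_) ++ctrl_->count_;
    }

    ~SharedPtr()
    {
        if (ctrl_ && --ctrl_->count_ == 0)
        {
            ctrl_->destroy_(ctrl_->object_);
            delete ctrl_;
        }
    }

    // assignment omitted to keep the sketch short
    SharedPtr& operator=(const SharedPtr&) = delete;

    T* get() const { return ptr_; }
    long useCount() const { return ctrl_ ? ctrl_->count_ : 0; }

private:
    T* ptr_;
    Ctrl* ctrl_;
};
```

The key design point is that the deleter is bound to the creation type, so destroying through a base-class copy still frees the original object correctly.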
My most recent project is yet another video player, Apprentice Video.
Demuxing/decoding/transforming is implemented via the excellent FFmpeg
libraries, audio rendering is implemented with a separate audio library,
video rendering is done with OpenGL (in a Qt4 GUI), and rich subtitle
rendering is done via libass.
The reader and renderer interfaces are abstract, so other
implementations may be possible using something besides FFmpeg.
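The shape of that abstraction can be sketched roughly like this (hypothetical names and signatures, not the actual Apprentice Video classes):

```cpp
#include <cstdint>
#include <vector>

// Hypothetical sketch of abstract reader/renderer interfaces.  The player
// core talks only to these interfaces, so the FFmpeg-based reader could be
// swapped for any other demuxer/decoder, and OpenGL for any other renderer.
struct VideoFrame
{
    int width;
    int height;
    std::vector<uint8_t> rgb;  // packed RGB24, width * height * 3 bytes
};

struct IReader
{
    virtual ~IReader() {}
    virtual bool open(const char* resourcePath) = 0;
    virtual bool readFrame(VideoFrame& frame) = 0;
};

struct IVideoRenderer
{
    virtual ~IVideoRenderer() {}
    virtual void render(const VideoFrame& frame) = 0;
};

// Trivial stand-in implementations, useful for exercising the player
// core without FFmpeg or OpenGL:
struct BlackFrameReader : IReader
{
    bool open(const char*) override { return true; }
    bool readFrame(VideoFrame& f) override
    {
        f.width = 16;
        f.height = 9;
        f.rgb.assign(std::size_t(f.width) * f.height * 3, 0);
        return true;
    }
};

struct CountingRenderer : IVideoRenderer
{
    int rendered = 0;
    void render(const VideoFrame&) override { ++rendered; }
};

// the player core: pull frames from any reader, push them to any renderer
inline int play(IReader& reader, IVideoRenderer& renderer, int maxFrames)
{
    VideoFrame f;
    int n = 0;
    while (n < maxFrames && reader.readFrame(f))
    {
        renderer.render(f);
        ++n;
    }
    return n;
}
```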
My code is liberally licensed, so if you want it you can have it.
I am doing this for fun, and hoping to learn something in the process.
For example, I've researched and implemented the WSOLA
(waveform similarity overlap-add) algorithm. WSOLA is used in
Apprentice Video to adjust audio playback
speed while preserving the voice pitch. I've ported this code to C
and contributed it to the FFmpeg project as the atempo audio filter.
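In outline, WSOLA copies short windowed slices of the input, re-spaces them by the tempo factor, and overlap-adds them, choosing each slice start with a small similarity search so consecutive slices stay waveform-aligned. A stripped-down mono sketch of that idea (my simplification, not the Apprentice Video or FFmpeg code):

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Stripped-down mono WSOLA time stretch: output duration ~= input / tempo,
// pitch preserved because every output frame is a verbatim windowed slice
// of the input -- only the spacing between slices changes.
std::vector<float> wsolaStretch(const std::vector<float>& in,
                                double tempo,        // 2.0 = twice as fast
                                std::size_t win = 400,
                                std::size_t search = 100)
{
    const std::size_t hop = win / 2;  // 50% overlap-add
    if (tempo <= 0.0 || in.size() < win + search + hop)
        return in;

    const double pi = std::acos(-1.0);
    std::vector<float> w(win);  // Hann window, sums to ~1 at 50% overlap
    for (std::size_t i = 0; i < win; ++i)
        w[i] = 0.5f - 0.5f * float(std::cos(2.0 * pi * i / (win - 1)));

    // lay down the first slice
    std::vector<float> out(win, 0.0f);
    for (std::size_t i = 0; i < win; ++i)
        out[i] = in[i] * w[i];

    double pos = 0.0;          // nominal (un-searched) analysis position
    std::size_t prev = 0;      // where the previous slice was taken from
    std::size_t outPos = 0;

    while (true)
    {
        pos += hop * tempo;
        std::ptrdiff_t lo = std::max<std::ptrdiff_t>(
            0, std::ptrdiff_t(pos) - std::ptrdiff_t(search));
        std::ptrdiff_t hi = std::ptrdiff_t(pos) + std::ptrdiff_t(search);
        if (hi + std::ptrdiff_t(win) >= std::ptrdiff_t(in.size()))
            break;

        // the "natural continuation" of the previous slice:
        const float* want = &in[prev + hop];

        // pick the candidate start that best matches that continuation
        std::ptrdiff_t best = lo;
        double bestScore = -1e300;
        for (std::ptrdiff_t c = lo; c <= hi; ++c)
        {
            double score = 0.0;
            for (std::size_t i = 0; i < hop; ++i)
                score += double(in[c + i]) * double(want[i]);
            if (score > bestScore)
            {
                bestScore = score;
                best = c;
            }
        }

        // overlap-add the chosen slice at the next synthesis hop
        outPos += hop;
        out.resize(outPos + win, 0.0f);
        for (std::size_t i = 0; i < win; ++i)
            out[outPos + i] += in[best + i] * w[i];
        prev = std::size_t(best);
    }

    return out;
}
```

At tempo 2.0 the output is roughly half as long; the window and search sizes here are arbitrary sample counts and would normally be derived from the sample rate.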
In March-April 2010 I created Yamka,
a simple parser/muxer API for the Matroska
media container format. Yamka is not meant (at least not yet) to be
distributed as a standalone library. You can get it via SVN from
sourceforge. Yamka is liberally licensed, and anyone interested is
encouraged to incorporate Yamka into their projects any way they wish.
Some of my after-hours effort has been spent on a project named
after the Bernstein basis functions in Bezier curves.
Here are some (somewhat dated) screenshots
of this program. It is based on yathe, a
collection of reusable code I've developed over the years, with a front
end implemented via Qt4. I have been building this
framework since late 1999, and have been able to successfully
reuse it in a number of projects at the U, such as this Probabilistic
Raytracer.
In October 2010 I added fullscreen stereoscopic rendering support for my
Samsung 3D HDTV.
Working with large image mosaics
made it necessary for me to create a custom image viewer, iv. I have
since moved this project to sourceforge under a new name.
It is also based on yathe.
While attending the Consumer Electronics Show 1997 I worked out a way
to control a laser beam to trace out images on a canvas. I am certain
I am not the first one to have thought of this
(E&S probably beat
me to it by a couple of decades), nevertheless...
I have written a series of simulators trying to evaluate and illustrate
the concept. Essentially, the laser beam is reflected from two
mirrors. The first mirror is responsible for
horizontal refresh. It is implemented as a short cylinder rotating
around its central axis. One of the bases of the cylinder is cut at 45
degrees. The laser light is coincident with the central axis of the
cylinder, so that when it hits the base of the cylinder, it is
deflected by 90 degrees. Next, there is a mirror responsible for
vertical refresh. It is implemented as a long cylinder which is cut in
half along its central axis. The two cylinders are positioned so that
when the laser ray is reflected from the first mirror, it strikes the
second mirror exactly along the central axis of the second
cylinder. This allows for better control of the ray.
You can take a look at simulator #2 (I don't
remember what happened to the first simulator). This simulator
did not account for the difference in the distance light has to
travel when reflected from the vertical refresh mirror toward the
top/bottom of the screen versus the middle of the screen. The sources to that
simulator can be found here.
The next step was to get rid of the squishing effect you can observe closer
to the vertical center of the picture created by simulator #2. Simulator
#3 improves on this, and also switches to horizontally continuous
tracing. I also attempted to create a C++ version of the program which
would generate all possible positions of the ray on canvas, so that simulation
could be replaced by a lookup table based on current angles of each
mirror. You can get the sources to simulator #3 right here.
The next versions of the simulator (#4 and #5) relied on a set of
functions (which I derived from the sketches of my
hypothetical projection TVs) to calculate the coordinates on the canvas
where the laser light will hit, given the current angles of the
mirrors. Those coordinates would be used to look up the color value of
that pixel on the canvas. Here is the source code to simulator #5.
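For the record, a toy version of such an angles-to-canvas mapping (my own simplified model, not the actual functions derived from those sketches): assume the horizontal mirror deflects the beam by angle alpha in the horizontal plane, the vertical mirror tilted by beta deflects it by 2*beta vertically, and the canvas is a plane at distance D. The horizontal hit is x = D*tan(alpha), but the beam travels D/cos(alpha) to reach that column, so the vertical hit is y = (D/cos(alpha))*tan(2*beta) -- exactly the travel-distance effect simulator #2 ignored.

```cpp
#include <cmath>
#include <utility>

// Toy two-mirror mapping: given horizontal scan angle `alpha` and vertical
// mirror tilt `beta` (radians), return the (x, y) hit point on a canvas
// plane at distance `dist` from the vertical-refresh mirror.  A mirror
// tilted by beta deflects the reflected ray by 2*beta.  Note the
// 1/cos(alpha) factor on y: the beam travels farther to the left/right
// edges of the canvas, so the same tilt sweeps a taller column there --
// ignore it and the picture squishes toward the vertical center.
std::pair<double, double> canvasHit(double alpha, double beta, double dist)
{
    double x = dist * std::tan(alpha);
    double travel = dist / std::cos(alpha);  // mirror-to-canvas path length
    double y = travel * std::tan(2.0 * beta);
    return { x, y };
}
```

Inverting (or tabulating) this kind of mapping over the reachable angle ranges is what makes a lookup table practical.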
During my first semester back in school (Winter-Spring 2004) I
used the Laser TV as the basis for my final project for the
Mathematical Modeling class. Here
you can find some sample C++ and Matlab code.
As an undergrad at the U of U I took a three-course series on
introduction to computer graphics taught
by Peter Shirley.
Here you will find source code to some of the Java applets
I have written during that time (Autumn 1997 - Spring 1998).
The first applet is a
Don't Shoot Yourself In The Leg Game.
This little program is intended to illustrate 2D rotations and
simulation of forces in games. Here is the
directory with the sources.
The next applet is a Moon Lander
Game. It was fairly CPU intensive at the time, since I had to do
low-level pixmap grabs on images in order to compensate for the lack
of alpha channel support in Java 1.1. You can see the source code here.
The third program is an attempt at 3D graphics
programming in Java. In this program I implemented a simple language
which describes polygons/bodyparts/models/animations
and represents everything with the help of a sequence of
Binary-Space-Partitioning (BSP) trees. The source code is here.
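The idea behind those trees can be sketched in a few lines (a hypothetical 2-D version with my own names, not the applet's actual structures): each BSP node splits space with a line, and a traversal that always visits the subtree farther from the eye first yields painter's-algorithm back-to-front drawing order.

```cpp
#include <memory>
#include <vector>

// Minimal 2-D BSP sketch: each node stores a splitting line a*x + b*y + c = 0
// plus the id of a polygon lying on that line; the children hold whatever
// lies on the positive ("front") and negative ("back") sides.
struct BspNode
{
    double a, b, c;  // the splitting line
    int polygon;     // id of the polygon embedded in this node
    std::unique_ptr<BspNode> front;
    std::unique_ptr<BspNode> back;
};

// signed side of the splitting line the point (x, y) falls on
inline double side(const BspNode& n, double x, double y)
{
    return n.a * x + n.b * y + n.c;
}

// Painter's algorithm: visit the subtree farther from the eye first, then
// this node's polygon, then the nearer subtree -- polygons come out in
// back-to-front order for the given eye position.
void backToFront(const BspNode* n, double ex, double ey, std::vector<int>& out)
{
    if (!n)
        return;
    if (side(*n, ex, ey) >= 0.0)
    {
        backToFront(n->back.get(), ex, ey, out);
        out.push_back(n->polygon);
        backToFront(n->front.get(), ex, ey, out);
    }
    else
    {
        backToFront(n->front.get(), ex, ey, out);
        out.push_back(n->polygon);
        backToFront(n->back.get(), ex, ey, out);
    }
}
```

The payoff is that one precomputed tree serves every viewpoint; only the traversal order changes as the eye moves.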
My todo list
My done list
My random thoughts