
Get Display Media, The Canvas API, and Recording Video in the Browser

I recently had the chance to work on a personal project using some technologies I am very excited about: the MediaDevices and Canvas APIs. The goal of the project is to let a presenter record a screen presentation, a camera "medallion" video, and a voice-over, and to turn all of those inputs (video and audio) into a single video the presenter can publish on the platform. I knew before getting started that the MediaDevices API, already available in the browser, can capture both your camera and your screen. As an aside, reading video and audio inputs and mixing them in the browser for video production is one of the coolest features browsers have ever had.

In this post I will describe some of the challenges I faced while developing the core functionality of the platform. These are challenges you will not solve just by browsing the MDN web docs, and they can otherwise cost developers hours, days, or sometimes weeks, since there is no authoritative documentation on how to approach them. My hope is to save days of research for anyone who hits the same issues, does a Google search, and lands on this post.

Issue no1: Make a screen recording with a video medallion in the top-right corner.

Solution: If you have multiple video sources, let’s say a screen capture media track for the main video and a camera capture for the medallion video, the only way to put the two together into an output video is to paint the two videos into a Canvas and use the Canvas.captureStream() API to get a video stream that can go into the final output video.

Example code (high level):

/**
 * Prompts the user for all necessary streams and displays them on video elements
 *
 * @param mainVideo - HTMLVideoElement
 * @param medallionVideo - HTMLVideoElement
 * @returns {Promise<MediaStream[]>}
 */
const capturePresentation = async (
  mainVideo: HTMLVideoElement,
  medallionVideo: HTMLVideoElement
) => {
  const videoConstraints = { video: true };
  const audioConstraints = { audio: true };

  // Capture video inputs
  const screen = await navigator.mediaDevices.getDisplayMedia(videoConstraints);
  const camera = await navigator.mediaDevices.getUserMedia(videoConstraints);

  // Display them on video elements
  mainVideo.srcObject = screen;
  medallionVideo.srcObject = camera;

  // Both getDisplayMedia and getUserMedia can capture sound;
  // however, I found it easier to reason about when the audio
  // is captured and stored separately. Here the voice-over
  // comes from the microphone, via getUserMedia.
  const audio = await navigator.mediaDevices.getUserMedia(audioConstraints);

  // Return the 3 streams that we will later
  // combine with a MediaRecorder
  return [screen, camera, audio];
};

/**
 * Paints the input sources onto the output canvas
 *
 * @param mainVideo - HTMLVideoElement
 * @param medallionVideo - HTMLVideoElement
 * @param canvasElement - HTMLCanvasElement
 */
const paintOnCanvas = (
  mainVideo: HTMLVideoElement,
  medallionVideo: HTMLVideoElement,
  canvasElement: HTMLCanvasElement
) => {
  const FPS = 30;
  const ctx = canvasElement.getContext("2d");
  if (!ctx) return;
  let myTimeout;

  const draw = () => {
    // Clear the canvas before writing to it
    ctx.clearRect(0, 0, canvasElement.width, canvasElement.height);

    // Avoid stacking timeouts if draw is ever triggered twice
    clearTimeout(myTimeout);

    ctx.drawImage(mainVideo, 0, 0);

    // Some additional math is needed to position the
    // medallion (dx, dy, dw, dh), depending on where you
    // want it in the geometry of the output video.
    // I leave that to the latitude of the implementer.

    // For a 300x200 medallion in the top-right corner
    const [dx, dy, dw, dh] = [canvasElement.width - 300, 0, 300, 200];

    ctx.drawImage(
      medallionVideo,
      0,
      0,
      medallionVideo.videoWidth,
      medallionVideo.videoHeight,
      dx,
      dy,
      dw,
      dh
    );

    // setTimeout is used on purpose instead of requestAnimationFrame.
    // I explain why in Issue no2 below.
    myTimeout = setTimeout(draw, 1000 / FPS);
  };

  myTimeout = setTimeout(draw, 1000 / FPS);
};
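The positioning math that the comment above leaves to the implementer can be factored out into a small pure helper. This is a hypothetical sketch (the function and its defaults are not from the original code): it pins the medallion to the top-right corner with a margin and preserves the camera's aspect ratio at a fixed target width.

```typescript
// Hypothetical helper: compute the destination rectangle for a medallion
// pinned to the top-right corner of the canvas, keeping the camera's
// aspect ratio at a fixed target width.
const medallionRect = (
  canvasWidth: number,
  videoWidth: number,
  videoHeight: number,
  targetWidth = 300,
  margin = 16
) => {
  // Scale the height so the camera frame is not distorted
  const targetHeight = Math.round(targetWidth * (videoHeight / videoWidth));
  return {
    dx: canvasWidth - targetWidth - margin, // flush to the right edge
    dy: margin,                             // flush to the top edge
    dw: targetWidth,
    dh: targetHeight,
  };
};
```

Inside draw() you would call medallionRect(canvasElement.width, medallionVideo.videoWidth, medallionVideo.videoHeight) and feed the resulting dx, dy, dw, dh into the second drawImage call.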

The reasons I used a timeout instead of the fancier requestAnimationFrame or requestVideoFrameCallback are the following:

  • requestAnimationFrame fires far more often than needed for screen capture. On large canvases (larger than 3000x2000 pixels, say) it can consume so many CPU and GPU resources that the machine heats up to the point of spinning up its cooling fans. Timeouts are gentler on the GPU because they result in fewer paint operations. Additionally, be mindful of the size of the canvas: there are sensible trade-offs to make, and you can still capture 1080p HD.
  • A timeout gives you more fine-grained control over when a function fires than requestAnimationFrame does. Its downsides are well understood, but for this purpose it worked better for me.
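To actually produce the output video, the canvas stream and the separately captured audio still need to be fed into a MediaRecorder. The sketch below shows that last step, assuming canvasElement is the canvas painted by paintOnCanvas and audioStream is the audio stream returned by capturePresentation; the "video/webm" mimeType and the captureStream frame rate are my assumptions here, so check browser support before relying on them.

```typescript
// Sketch (assumptions noted above): combine the canvas video track with the
// separately captured audio track(s) and record them into a single Blob.
const recordComposite = (
  canvasElement: HTMLCanvasElement,
  audioStream: MediaStream,
  fps = 30
) => {
  // captureStream emits frames as the draw loop paints them
  const canvasStream = canvasElement.captureStream(fps);

  // Merge the canvas video with the audio into one stream
  const combined = new MediaStream([
    ...canvasStream.getVideoTracks(),
    ...audioStream.getAudioTracks(),
  ]);

  const chunks: Blob[] = [];
  const recorder = new MediaRecorder(combined, { mimeType: "video/webm" });
  recorder.ondataavailable = (e) => {
    if (e.data.size > 0) chunks.push(e.data);
  };

  // Resolves with the final video Blob once recording stops
  const done = new Promise<Blob>((resolve) => {
    recorder.onstop = () => resolve(new Blob(chunks, { type: "video/webm" }));
  });

  recorder.start();
  return { stop: () => recorder.stop(), done };
};
```

The caller starts the draw loop, calls recordComposite, and later calls stop() and awaits done to get the publishable Blob.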

Issue no2: When I record an application's screen with getDisplayMedia and that application goes fullscreen, painting its frames onto a Canvas element captures only the frames from before and after the fullscreen period.

Solution: Both requestAnimationFrame and requestVideoFrameCallback have this massive bug (please, someone point me to where I can open a bug report with Chrome and the other browsers about this): when the application being recorded goes fullscreen and you try to paint its frames onto a Canvas to compose a video, the callbacks of these methods simply stop firing. I am not sure whether some security policy is behind it, but my experience, after spending a few days on this, is that the only way to keep recording an application's screen while it is fullscreen is to use a timeout.

This issue is not documented anywhere, and unless there is some security policy that I have missed, I feel I might be doing the community a favor by making it discoverable. I will investigate it via a bug report with Chrome and link it back here when time allows.

Issue no3: When I captureStream from a Canvas onto which images from a different domain have been painted, captureStream produces a Blob of size 0; the stream capture fails silently.

Solution: I encountered this issue when trying to captureStream from a canvas element onto which I was painting images from another domain. The drawImage operation worked without a problem, but the stream captured from that canvas produced a Blob of size 0. It took many hours of hard debugging to understand that the canvas was "tainted". I discovered this when, during debugging, I tried to capture a Blob directly from the Canvas rather than from the stream, and there the browser actually complained about the tainting. A tainted canvas is one that has pixels drawn from sources whose origin differs from the origin of the page hosting the canvas element.

For this to work, go to your CloudFront distribution, or whatever CDN you use to distribute media assets, and enable CORS. It will work like a charm thereafter.
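For reference, a CORS configuration along these lines on the origin serving the assets (an illustrative S3-style example; the origin URL and exact settings are placeholders, and your CDN's configuration format will differ) is what allows the browser to treat the images as untainted:

```json
[
  {
    "AllowedOrigins": ["https://your-app.example.com"],
    "AllowedMethods": ["GET", "HEAD"],
    "AllowedHeaders": ["*"],
    "MaxAgeSeconds": 3000
  }
]
```

On the client side, the images also have to be requested with CORS: set img.crossOrigin = "anonymous" before assigning img.src, so that a successful CORS response leaves the canvas untainted when you drawImage from it.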

The most confusing part was that the paint operation on the Canvas worked fine, while capturing the stream failed without any error or warning whatsoever; it can take hours to realize what the root cause is.

I hope I have saved you a few hours of troubleshooting if you landed on this post looking for a solution to the same or a similar problem; otherwise, I hope it was educational or at the very least entertaining. My mission as a software engineer who has been benefiting from open source, and from places like Medium and Stack Overflow, for a while now is to give back, and writing on Medium is my way of doing that.
