---
title: BaseAudioContext.createBuffer()
slug: Web/API/BaseAudioContext/createBuffer
tags:
- API
- Audio
- AudioContext
- BaseAudioContext
- Buffer
- Media
- Method
- Reference
- Web Audio
- Web Audio API
- createBuffer
browser-compat: api.BaseAudioContext.createBuffer
---
<p>{{ APIRef("Web Audio API") }}</p>
<p>The <code>createBuffer()</code> method of the {{ domxref("BaseAudioContext") }}
  interface is used to create a new, empty {{ domxref("AudioBuffer") }} object, which
  can then be populated with data and played via an
  {{ domxref("AudioBufferSourceNode") }}.</p>
<p>For more details about audio buffers, check out the {{ domxref("AudioBuffer") }}
reference page.</p>
<div class="note">
<p><strong>Note:</strong> <code>createBuffer()</code> used to be able to take compressed
data and give back decoded samples, but this ability was removed from the specification,
because all the decoding was done on the main thread, so
<code>createBuffer()</code> was blocking other code execution. The asynchronous method
<code>decodeAudioData()</code> does the same thing — takes compressed audio, such as an
MP3 file, and directly gives you back an {{ domxref("AudioBuffer") }} that you can
then play via an {{ domxref("AudioBufferSourceNode") }}. For simple use cases
like playing an MP3, <code>decodeAudioData()</code> is what you should be using.</p>
</div>
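<p>As a rough sketch of that recommended approach (the file name <code>viper.mp3</code>
  is just a placeholder for whatever compressed resource you fetch yourself), decoding
  and playing an MP3 might look like this:</p>
<pre class="brush: js">var audioCtx = new AudioContext();

// Fetch a compressed file, decode it asynchronously,
// and play the resulting AudioBuffer
fetch('viper.mp3')
  .then(function(response) { return response.arrayBuffer(); })
  .then(function(arrayBuffer) { return audioCtx.decodeAudioData(arrayBuffer); })
  .then(function(decodedBuffer) {
    var source = audioCtx.createBufferSource();
    source.buffer = decodedBuffer;
    source.connect(audioCtx.destination);
    source.start();
  });</pre>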
<h2 id="Syntax">Syntax</h2>
<pre
  class="brush: js">var <em>buffer</em> = <em>baseAudioContext</em>.createBuffer(<em>numOfChannels</em>, <em>length</em>, <em>sampleRate</em>);</pre>
<h3 id="Parameters">Parameters</h3>
<div class="note">
<p><strong>Note:</strong> For an in-depth explanation of how audio buffers work, and
what these parameters mean, read <a
href="/en-US/docs/Web/API/Web_Audio_API/Basic_concepts_behind_Web_Audio_API#audio_buffers.3a_frames.2c_samples_and_channels">Audio
buffers: frames, samples and channels</a> from our Basic concepts guide.</p>
</div>
<dl>
<dt><code>numOfChannels</code></dt>
<dd>An integer representing the number of channels this buffer should have. The default
value is 1, and all user agents must support at least 32 channels.</dd>
  <dt><code>length</code></dt>
  <dd>An integer representing the size of the buffer in sample-frames (where each
    sample-frame consists of one sample for each of the <code>numOfChannels</code>
    channels). To determine the <code>length</code> to use for a specific number of
    seconds of audio, use <code>numSeconds * sampleRate</code> (see the short example
    after this list).</dd>
<dt><code>sampleRate</code></dt>
<dd>The sample rate of the linear audio data in sample-frames per second. All browsers
must support sample rates in at least the range 8,000 Hz to 96,000 Hz.</dd>
</dl>
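<p>For example, a buffer sized to hold exactly 2.5 seconds of audio at the context's own
  sample rate (the duration here is an arbitrary illustration) could be created like
  this:</p>
<pre class="brush: js">var audioCtx = new AudioContext();
var numSeconds = 2.5;

// length in sample-frames is seconds multiplied by the sample rate
var frameCount = numSeconds * audioCtx.sampleRate;

var buffer = audioCtx.createBuffer(2, frameCount, audioCtx.sampleRate);
console.log(buffer.duration); // 2.5</pre>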
<h3 id="Returns">Returns</h3>
<p>An {{domxref("AudioBuffer")}} configured based on the specified options.</p>
<h3 id="Exceptions">Exceptions</h3>
<dl>
<dt><code>NotSupportedError</code></dt>
  <dd>One or more of the options are negative or otherwise have an invalid value (such as
    <code>numOfChannels</code> being higher than supported, or a
    <code>sampleRate</code> outside the nominal range).</dd>
<dt><code>RangeError</code></dt>
<dd>There isn't enough memory available to allocate the buffer.</dd>
</dl>
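<p>A quick way to see the <code>NotSupportedError</code> case for yourself (purely
  illustrative; 1 Hz is far below the minimum rate browsers must support) is:</p>
<pre class="brush: js">var audioCtx = new AudioContext();

try {
  // A sampleRate of 1 Hz is outside the nominal range,
  // so this should throw a NotSupportedError
  audioCtx.createBuffer(2, 22050, 1);
} catch (e) {
  console.log(e.name); // "NotSupportedError"
}</pre>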
<h2 id="Examples">Examples</h2>
<p>First, a couple of simple examples, to help explain how the parameters are
  used:</p>
<pre class="brush: js">var audioCtx = new AudioContext();
var buffer = audioCtx.createBuffer(2, 22050, 44100);</pre>
<p>If you use this call, you will get a stereo buffer (two channels) that, when played
  back on an <code>AudioContext</code> running at 44100Hz (very common; most normal sound
  cards run at this rate), will last for 0.5 seconds: 22050 frames / 44100Hz = 0.5
  seconds.</p>
<pre class="brush: js">var audioCtx = new AudioContext();
var buffer = audioCtx.createBuffer(1, 22050, 22050);</pre>
<p>If you use this call, you will get a mono buffer (one channel) that, when played back
  on an <code>AudioContext</code> running at 44100Hz, will be automatically
  <em>resampled</em> to 44100Hz (and therefore yield 44100 frames), and last for 1.0
  second: 44100 frames / 44100Hz = 1 second.</p>
<div class="note">
<p><strong>Note:</strong> audio resampling is very similar to image resizing: say you've
got a 16 x 16 image, but you want it to fill a 32x32 area: you resize (resample) it.
the result has less quality (it can be blurry or edgy, depending on the resizing
algorithm), but it works, and the resized image takes up less space. Resampled audio
is exactly the same — you save space, but in practice you will be unable to properly
reproduce high frequency content (treble sound).</p>
</div>
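<p>To put a number on that limit: digital audio can only represent frequencies up to half
  its sample rate (the Nyquist frequency), so for the rates used above:</p>
<pre class="brush: js">// The highest representable frequency is half the sample rate
function nyquist(sampleRate) {
  return sampleRate / 2;
}

console.log(nyquist(22050)); // 11025 Hz: treble above this is lost
console.log(nyquist(44100)); // 22050 Hz: covers the full audible range</pre>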
<p>Now let's look at a more complex <code>createBuffer()</code> example, in which we
  create a three-second buffer, fill it with white noise, and then play it via an {{
  domxref("AudioBufferSourceNode") }}. The comments should clearly explain what is going
  on. You can also <a href="https://mdn.github.io/webaudio-examples/audio-buffer/">run the
  code live</a>, or <a
  href="https://github.com/mdn/webaudio-examples/blob/master/audio-buffer/index.html">view
  the source</a>.</p>
<pre class="brush: js;">var audioCtx = new (window.AudioContext || window.webkitAudioContext)();
// Create an empty three-second stereo buffer at the sample rate of the AudioContext
var myArrayBuffer = audioCtx.createBuffer(2, audioCtx.sampleRate * 3, audioCtx.sampleRate);
// Fill the buffer with white noise;
// just random values between -1.0 and 1.0
for (var channel = 0; channel < myArrayBuffer.numberOfChannels; channel++) {
// This gives us the actual ArrayBuffer that contains the data
var nowBuffering = myArrayBuffer.getChannelData(channel);
for (var i = 0; i < myArrayBuffer.length; i++) {
// Math.random() is in [0; 1.0]
// audio needs to be in [-1.0; 1.0]
nowBuffering[i] = Math.random() * 2 - 1;
}
}
// Get an AudioBufferSourceNode.
// This is the AudioNode to use when we want to play an AudioBuffer
var source = audioCtx.createBufferSource();
// set the buffer in the AudioBufferSourceNode
source.buffer = myArrayBuffer;
// connect the AudioBufferSourceNode to the
// destination so we can hear the sound
source.connect(audioCtx.destination);
// start the source playing
source.start();</pre>
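<p>One detail worth knowing when reusing this pattern: an
  {{ domxref("AudioBufferSourceNode") }} can only be started once, so to play the same
  buffer again you create a fresh source node each time (the buffer itself can be shared
  freely). A small helper along these lines (the name <code>playBuffer</code> is just for
  illustration) might look like:</p>
<pre class="brush: js">// An AudioBufferSourceNode is single-use: start() can only be called once,
// so create a new node for each playback of the same AudioBuffer
function playBuffer(audioCtx, buffer) {
  var source = audioCtx.createBufferSource();
  source.buffer = buffer;
  source.connect(audioCtx.destination);
  source.start();
  return source;
}

playBuffer(audioCtx, myArrayBuffer); // plays the noise buffer created above</pre>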
<h2 id="Specifications">Specifications</h2>
{{Specifications}}
<h2 id="Browser_compatibility">Browser compatibility</h2>
<p>{{Compat}}</p>
<h2 id="See_also">See also</h2>
<ul>
<li><a href="/en-US/docs/Web/API/Web_Audio_API/Using_Web_Audio_API">Using the Web Audio API</a>
</li>
</ul>