OpenAL Lesson

Posted on 2006-10-24 14:10:56

OpenAL Lesson 1: Simple Static Sound

Introduction

Welcome to the exciting world of OpenAL! OpenAL is still in a stage of growth, and even though there is an ever larger following to the API it still hasn't reached its full potential. One of the big reasons for this is that there is still no hardware acceleration built in for specific cards. However, Creative Labs is a major contributor to the OpenAL project and also happens to be one of the largest soundcard manufacturers, so there is a promise of hardware accelerated features in the near future. OpenAL's only other major contributor, Loki, has gone the way of the dinosaur, so the future of OpenAL on Linux platforms is uncertain. You can still obtain the Linux binaries on some more obscure websites.

OpenAL has also not been seen in many major commercial products, which may have also hurt its growth. As far as I know the only PC game to use OpenAL has been Metal Gear 2 (although recently I've discovered that Unreal 2 does as well). The popular modeling program, Blender3D, was also known to use OpenAL for all its audio playback. Aside from these, however, the only other OpenAL uses have been in the SDK examples and a few obscure tutorials on the internet.

But let's face it, OpenAL has a lot of potential. There are many other audio libraries that claim to work with the hardware on a lower level (and this may be true), but the designers of OpenAL did several things in its design which make it a superior API. First of all, they emulated the OpenGL API, which is one of the best ever designed. The API style is flexible, so different coding methods and hardware implementations can take advantage of it. People who have had a lot of experience with OpenGL will be able to pick up OpenAL quite fast. OpenAL also has the advantage of creating 3D surround sound, which a lot of other APIs cannot boast. On top of all of that, it can also extend itself into EAX and AC3 flawlessly. To my knowledge no other audio library has that capability.

If you still haven't found a reason here to use OpenAL then here's another. It's just cool. It's a nice looking API and will integrate well into your code. You will be able to do many cool sound effects with it. But before we do that we have to learn the basics.

So let's get coding!

#include <conio.h>
#include <stdlib.h>
#include <AL/al.h>
#include <AL/alc.h>
#include <AL/alu.h>
#include <AL/alut.h>

You will notice there are similarities between the OpenAL headers and the OpenGL headers. There is an "al.h", "alu.h", and "alut.h" just like "gl.h", "glu.h", and "glut.h", but there is also an "alc.h". The Alc stands for Audio Library Context, and it handles the sound devices across different platforms. It also handles situations where you would want to share a device over several windows. If you want to liken Alc to anything in OpenGL, it's probably most like the wgl and glX extensions. We won't go much into Alc since the Alut library handles it internally for us, and for now we'll just let it be. We will also not use Alu very much since it only provides some math functionality which we will not need for small demos.

// Buffers hold sound data.
ALuint Buffer;
// Sources are points emitting sound.
ALuint Source;

Those familiar with OpenGL know that it uses "texture objects" (or "texture names") to handle textures used by a program. OpenAL does a similar thing with audio samples. There are essentially three kinds of objects in OpenAL: a buffer, which stores the sound data itself along with information about how the sound should be played, and a source, which is a point in space that emits a sound. (The third, the listener, is covered below.) It's important to understand that a source is not itself an audio sample. A source only plays back sound data from a buffer bound to it. The source is also given special properties like position and velocity.

The third object which I have not mentioned yet is the listener. There is only one listener which represents where 'you' are, the user. The listener properties along with the source properties determine how the audio sample will be heard. For example their relative positions will determine the intensity of the sound.

// Position of the source sound.
ALfloat SourcePos[] = { 0.0, 0.0, 0.0 };
// Velocity of the source sound.
ALfloat SourceVel[] = { 0.0, 0.0, 0.0 };
// Position of the listener.
ALfloat ListenerPos[] = { 0.0, 0.0, 0.0 };
// Velocity of the listener.
ALfloat ListenerVel[] = { 0.0, 0.0, 0.0 };
// Orientation of the listener. (first 3 elements are "at", second 3 are "up")
ALfloat ListenerOri[] = { 0.0, 0.0, -1.0, 0.0, 1.0, 0.0 };

In the above code we specify the position and velocity of the source and listener objects. These arrays are Cartesian coordinate vectors. You could easily build a structure or class to do the same thing; in this example I used arrays for simplicity.
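For instance, a minimal wrapper type might look like this (a hypothetical sketch, not part of the original tutorial; OpenAL's *fv calls just need a pointer to three consecutive ALfloat values):

// Hypothetical convenience type (not from the tutorial).
struct Vec3
{
    ALfloat x, y, z;
};

Vec3 listenerPos = { 0.0f, 0.0f, 0.0f };

// The members are three consecutive ALfloats, so &listenerPos.x can be
// passed wherever an ALfloat[3] is expected, e.g.:
// alListenerfv(AL_POSITION, &listenerPos.x);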

ALboolean LoadALData()
{
    // Variables to load into.
    ALenum format;
    ALsizei size;
    ALvoid* data;
    ALsizei freq;
    ALboolean loop;

Here we will create a function that loads all of our sound data from a file. The variables are necessary to store some information that Alut will be giving us.

    // Load wav data into a buffer.
    alGenBuffers(1, &Buffer);

    if (alGetError() != AL_NO_ERROR)
        return AL_FALSE;

    alutLoadWAVFile("wavdata/FancyPants.wav", &format, &data, &size, &freq, &loop);
    alBufferData(Buffer, format, data, size, freq);
    alutUnloadWAV(format, data, size, freq);

The function 'alGenBuffers' will create the buffer objects and store them in the variable we passed it. It's important to do an error check to make sure everything went smoothly. There may be a case in which OpenAL could not generate a buffer object due to a lack of memory. In this case it would set the error bit.

The Alut library is very helpful here. It opens up the file for us and gives us all the information we need to create the buffer. And after we have attached all this data to the buffer, it will help us dispose of the data. It all works in a clean and efficient manner.

    // Bind buffer with a source.
    alGenSources(1, &Source);

    if (alGetError() != AL_NO_ERROR)
        return AL_FALSE;

    alSourcei (Source, AL_BUFFER,   Buffer   );
    alSourcef (Source, AL_PITCH,    1.0f     );
    alSourcef (Source, AL_GAIN,     1.0f     );
    alSourcefv(Source, AL_POSITION, SourcePos);
    alSourcefv(Source, AL_VELOCITY, SourceVel);
    alSourcei (Source, AL_LOOPING,  loop     );

We generate a source object in the same manner we generated the buffer object. Then we define the source properties that it will use during playback. The most important of these is the buffer it should use. This tells the source which audio sample to play back; in this case we only have one, so we bind it. We also give the source the position and velocity we defined earlier.

One more thing on 'alGenBuffers' and 'alGenSources'. In some example code I have seen these functions will return an integer value for the number of buffers/sources created. I suppose this was meant as an error checking feature that was left out in a later version. If you see this done in other code don't use it yourself. If you want to do this check, use 'alGetError' instead (like we have done above).
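To illustrate the recommended pattern, here is a tiny hypothetical helper (my own sketch, not from the tutorial) that wraps generation with an 'alGetError' check:

// Hypothetical helper: generate one buffer and report success via alGetError.
// Assumes an OpenAL context is already current.
ALboolean GenBufferChecked(ALuint* buffer)
{
    alGetError(); // clear any stale error bit first

    alGenBuffers(1, buffer);

    return (alGetError() == AL_NO_ERROR) ? AL_TRUE : AL_FALSE;
}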

    // Do another error check and return.
    if (alGetError() == AL_NO_ERROR)
        return AL_TRUE;

    return AL_FALSE;
}

To end the function we just do one more check to make sure all is well, then we return success.

void SetListenerValues()
{
    alListenerfv(AL_POSITION,    ListenerPos);
    alListenerfv(AL_VELOCITY,    ListenerVel);
    alListenerfv(AL_ORIENTATION, ListenerOri);
}

We created this function to update the listener properties.

void KillALData()
{
    alDeleteBuffers(1, &Buffer);
    alDeleteSources(1, &Source);
    alutExit();
}

This will be our shutdown procedure. It is necessary to call this to release all the memory and audio devices that our program may be using.

int main(int argc, char *argv[])
{
    // Initialize OpenAL and clear the error bit.
    alutInit(&argc, argv);
    alGetError();

The function 'alutInit' will set up everything that the Alc needs to do for us. Basically Alut creates a single OpenAL context through Alc and sets it to current. On the Windows platform it initializes DirectSound. We also do an initial call to the error function to clear it. Every time we call 'alGetError' it will reset itself to 'AL_NO_ERROR'.

    // Load the wav data.
    if (LoadALData() == AL_FALSE)
        return -1;

    SetListenerValues();

    // Setup an exit procedure.
    atexit(KillALData);

We will check to see if the wav files loaded correctly. If not we must exit the program. Then we update the listener values, and finally we set our exit procedure.

    ALubyte c = ' ';

    while (c != 'q')
    {
        c = getche();

        switch (c)
        {
            // Pressing 'p' will begin playing the sample.
            case 'p': alSourcePlay(Source); break;

            // Pressing 's' will stop the sample from playing.
            case 's': alSourceStop(Source); break;

            // Pressing 'h' will pause (hold) the sample.
            case 'h': alSourcePause(Source); break;
        };
    }

    return 0;
}

This is the interesting part of the tutorial. It's a very basic loop that lets us control the playback of the audio sample. Pressing 'p' will replay the sample, pressing 's' will stop the sample, and pressing 'h' will pause the sample. Pressing 'q' will exit the program.

Well there it is. Your first delve into OpenAL. I hope it was made simple enough for you. It may have been a little too simple for the 1337 h4X0r, but we've all got to start somewhere. Things will get more advanced as we go along.

Download the Dev-C++ source and project file
Download the Visual C++ 6.0 source and project file - (ported by TheCell)
Download the Java source code - (ported by Athomas Goldberg)
Download the Linux port of this tutorial - (ported by Lee Trager)
Download the MacOS port of this tutorial - (ported by Joshua Schrier)

OP | Posted on 2006-11-7 11:35:32

Introduction

This tutorial is designed to follow directly from OpenAL Lesson 8: OggVorbis Streaming - Using The Source Queue by Jesse Maurais. For this reason, the code base used is almost identical to the one used in that lesson, the only changes being in how the file is loaded. Everything else is exactly the same. That's one of the real strengths of the OggVorbis libraries: you can load in the .ogg file however you please, and all the other calls are used in exactly the same way. So, on with the show.

Getting The File Into Memory

The OggVorbis file needs to be loaded into memory before we can start parsing it using the vorbis libraries. The method used in the example program is, in no uncertain terms, a true bodge. Normally, the file would have been preloaded into memory at loading time, maybe extracted from a pak file or the like. For the sake of this tutorial, just ignore how I've gotten it into main memory.

Getting Ready To Read From Memory

The Vorbis libraries don't actually have any support for loading files from memory; they leave it up to you, only asking that you return the right data as and when they need it. (This is a pretty cool approach, meaning you can read the .ogg file from just about anywhere and anyhow.) For this to work, they require you to provide four callback functions. (If you don't know what a callback function is, get reading a book on C programming. ;)

The callbacks are passed using the ov_callbacks structure and they are:

Function Name   Description
read_func       Function used to read data from memory
close_func      Function used to close the file in memory
seek_func       Function used to seek to a specific part of the file in memory
tell_func       Function used to tell how much of the file we have read so far

These functions are expected to work in exactly the same way as the standard C IO functions (fread(...), ftell(...) etc.), and as such have the same inputs and the same return values.

Once we have declared our callback functions, we store them within the ov_callbacks structure:

vorbisCallbacks.read_func  = VorbisRead;
vorbisCallbacks.close_func = VorbisClose;
vorbisCallbacks.seek_func  = VorbisSeek;
vorbisCallbacks.tell_func  = VorbisTell;

Now that we have told the vorbis libs how we are going to perform all the different IO actions on the data, we need to actually pass the load function a pointer to our data. To do this, we need to define our own struct, as this will allow us to perform all the different IO actions we need.

The struct is as follows:

struct SOggFile
{
    char* dataPtr;    // Pointer to the data in memory
    int   dataSize;   // Size of the data
    int   dataRead;   // How much data we have read so far
};
SOggFile oggMemoryFile;

Once we have the structure we need to initialise it. Pretty simple stuff. We save the pointer to the data in dataPtr, the size of the data in dataSize, and set dataRead to 0 (as we have yet to actually read anything in). Now we just need to set the wheels in motion:

    ov_open_callbacks(&oggMemoryFile, &oggStream, NULL, 0, vorbisCallbacks);

This function informs the libraries that we want to open an .ogg file, but instead of using a FILE, we are going to read in the data ourselves. The only thing here that we haven't seen before is the oggStream. This is just a pointer to an OggVorbis_File, which will be set up for us as we read in the file.

Reading In The Data From Memory

Now all that is left to do is actually read the data from memory. This is all done inside the callback functions that we defined earlier. Let's take a look at each one in turn...

VorbisRead

size_t VorbisRead(void *ptr,           // ptr to the buffer the vorbis libs want filled
                  size_t byteSize,     // size of each element to read, in bytes
                  size_t sizeToRead,   // how many elements we can read
                  void *datasource)    /* this is a pointer to the data we passed
                                          into ov_open_callbacks (our SOggFile struct) */
{
    ...
}

This function requires you to read in the data from memory and place it into the ptr variable (which will be of size byteSize*sizeToRead). All we need to do in this case is memcpy(...) our data from memory into ptr, making sure that we do not go beyond the bounds of our data in memory.

A return of 0 means that we have reached the end of the file and were unable to read any more data. Otherwise, we must return the amount of data we have read.
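As a concrete illustration, a body along those lines might look like this (a sketch assuming the SOggFile struct above and <string.h> for memcpy; the download has the author's exact version):

size_t VorbisRead(void *ptr, size_t byteSize, size_t sizeToRead, void *datasource)
{
    SOggFile *file = (SOggFile*)datasource;

    // Clamp the request so we never read past the end of the memory file.
    size_t bytesRequested = byteSize * sizeToRead;
    size_t bytesLeft      = (size_t)(file->dataSize - file->dataRead);
    size_t bytesToCopy    = (bytesRequested < bytesLeft) ? bytesRequested : bytesLeft;

    memcpy(ptr, file->dataPtr + file->dataRead, bytesToCopy);
    file->dataRead += (int)bytesToCopy;

    // 0 signals end of file; otherwise return how much we actually read.
    return bytesToCopy;
}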

VorbisSeek

int VorbisSeek(void *datasource,   // this is a pointer to the data we passed into ov_open_callbacks (our SOggFile struct)
               ogg_int64_t offset, // offset, in bytes, from the position given by 'whence'
               int whence)         // the reference point to seek from (SEEK_SET, SEEK_CUR or SEEK_END)
{
    ...
}

This function works in the same way as fseek(...). We are given a reference point from which to seek (SEEK_SET, SEEK_CUR, SEEK_END), and we must move our data pointer accordingly (again making sure we do not move past the boundary of our data).

A return of -1 means that this file is not seekable (i.e. you cannot move the pointer, and as such, cannot rewind the file). This is fine if we don't want to loop the sample, but if we do, we need to be able to set the pointer back to the beginning of the file. A return of 0 is a successful call.
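A sketch of such a seek routine, under the same assumptions as the read example above:

int VorbisSeek(void *datasource, ogg_int64_t offset, int whence)
{
    SOggFile *file = (SOggFile*)datasource;
    ogg_int64_t target;

    switch (whence)
    {
        case SEEK_SET: target = offset;                  break; // from the start
        case SEEK_CUR: target = file->dataRead + offset; break; // from where we are
        case SEEK_END: target = file->dataSize + offset; break; // from the end
        default:       return -1;
    }

    // Refuse to move outside the bounds of the memory file.
    if (target < 0 || target > file->dataSize)
        return -1;

    file->dataRead = (int)target;
    return 0; // success
}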

VorbisClose

int VorbisClose(void *datasource) // this is a pointer to the data we passed into ov_open_callbacks (our SOggFile struct)
{
    ...
}

This function is called when we call ov_close(...) within our code. The main use of this function is to clean up any allocations made during the opening of the file (of which there should be few, if any). You could, if you wanted, clean up the file in memory, but in most cases I assume the memory would be cleaned up elsewhere, and so this function is left empty.

The return value is irrelevant; it is assumed this function always succeeds.

VorbisTell

long VorbisTell(void *datasource) // this is a pointer to the data we passed into ov_open_callbacks (our SOggFile struct)
{
    ...
}

This is used to inform the libraries how much of the file we have read so far. Pretty simple stuff: just return the amount we have read. A return of -1 indicates an error, but I can't imagine when this might happen!
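The tell callback is the simplest of the lot; a sketch, again assuming the SOggFile struct:

long VorbisTell(void *datasource)
{
    SOggFile *file = (SOggFile*)datasource;

    // Report how far into the memory file we currently are.
    return (long)file->dataRead;
}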

Conclusion

There we have it. A few simple functions and the file is loaded from memory. The only thing left to do is clear the file we loaded into memory in the first place, but it needs to stay there through the duration of the sample (and beyond if we are looping it, or playing it again).

As you've probably gathered, by using the callbacks we can load a file from absolutely anywhere, not just from memory. Giving the developer a means of controlling how and when the data is read in allows us to mess around with the sound as we see fit, letting us create simple effects without having to change the actual source sample!

Included with this tutorial is a sample program that demonstrates the implementation (see link below). I hope this tutorial is of use to some of you out there; it would be good to know that my bedroom coding isn't always going to waste ;) If you have any queries or problems, I'm sure you will be able to drop me a line, or post in the forums.

Download the Linux port of this tutorial - (ported by Lee Trager)

Download source code for this article
Discuss this article in the forums
OP | Posted on 2006-11-7 11:29:34

Here is how the queue works in a nutshell: There is a 'list' of buffers. When you unqueue a buffer it gets popped off of the front. When you queue a buffer it gets pushed to the back. That's it. Simple enough?

This is one of the two most important methods in the class. What we do in this bit of code is check whether any buffers have already been played. If any have, we pop each of them off the front of the queue, refill them with fresh data from the stream, and push them back onto the end of the queue so that they can be played. Hopefully the listener will have no idea that we have done this; it should sound like one long continuous chain of music. The 'stream' method also tells us if the stream is finished playing. This flag is reported back when the function returns.
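The method described above is not shown in this post; reconstructed from that description (a sketch, so treat it as such and check the download for the author's exact version), it looks roughly like this:

bool ogg_stream::update()
{
    int  processed;
    bool active = true;

    alGetSourcei(source, AL_BUFFERS_PROCESSED, &processed);

    while (processed--)
    {
        ALuint buffer;

        alSourceUnqueueBuffers(source, 1, &buffer); // pop a played buffer off the front
        check();

        active = stream(buffer);                    // refill it from the Ogg bitstream

        alSourceQueueBuffers(source, 1, &buffer);   // push it onto the back of the queue
        check();
    }

    return active;
}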

bool ogg_stream::stream(ALuint buffer)
{
    char data[BUFFER_SIZE];
    int  size = 0;
    int  section;
    int  result;

    while (size < BUFFER_SIZE)
    {
        result = ov_read(&oggStream, data + size, BUFFER_SIZE - size, 0, 2, 1, &section);

        if (result > 0)
            size += result;
        else if (result < 0)
            throw errorString(result);
        else
            break;
    }

    if (size == 0)
        return false;

    alBufferData(buffer, format, data, size, vorbisInfo->rate);
    check();

    return true;
}

This is another important method of the class. This part fills the buffers with data from the Ogg bitstream. It's a little harder to get a grip on because it's not explainable in a top down manner. 'ov_read' does exactly what you may be thinking it does: it reads data from the Ogg bitstream. vorbisfile does all the decoding of the bitstream, so we don't have to worry about that. This function takes our 'oggStream' structure, a data buffer where it can write the decoded audio, and the size of the chunk you want to decode. The last 4 arguments you don't really have to worry about, but I will explain anyway. Respectively: the first indicates little endian (0) or big endian (1), the second indicates the data size (in bytes) as 8 bit (1) or 16 bit (2), the third indicates whether the data is unsigned (0) or signed (1), and the last receives the number of the current bitstream.

The return value of 'ov_read' indicates several things. If the value of the result is positive then it represents how much data was read. This is important because 'ov_read' may not be able to read the entire size requested (usually because it's at the end of the file and there's nothing left to read). Always use the result of 'ov_read' rather than 'BUFFER_SIZE' to know how much data you actually have. If the result of 'ov_read' happens to be negative then it indicates that there was an error in the bitstream; the value of the result is an error code in this case. If the result happens to equal zero then there is nothing left in the file to play.

What makes this code complicated is the while loop. This method was designed to be modular and modifiable. You can change 'BUFFER_SIZE' to whatever you would like and it will still work, but this requires us to make sure that we fill the entire size of the buffer with multiple calls to 'ov_read' and make sure that everything aligns properly. The last part of this method is the call to 'alBufferData', which fills the buffer id with the data that we streamed from the Ogg using 'ov_read'. We employ the 'format' and 'vorbisInfo' data that we set up earlier.

void ogg_stream::empty()
{
    int queued;

    alGetSourcei(source, AL_BUFFERS_QUEUED, &queued);

    while (queued--)
    {
        ALuint buffer;

        alSourceUnqueueBuffers(source, 1, &buffer);
        check();
    }
}

This method will unqueue any buffers that are pending on the source.

void ogg_stream::check()
{
    int error = alGetError();

    if (error != AL_NO_ERROR)
        throw string("OpenAL error was raised.");
}

This saves us some typing for our error checks.

string ogg_stream::errorString(int code)
{
    switch (code)
    {
        case OV_EREAD:
            return string("Read from media.");
        case OV_ENOTVORBIS:
            return string("Not Vorbis data.");
        case OV_EVERSION:
            return string("Vorbis version mismatch.");
        case OV_EBADHEADER:
            return string("Invalid Vorbis header.");
        case OV_EFAULT:
            return string("Internal logic fault (bug or heap/stack corruption).");
        default:
            return string("Unknown Ogg error.");
    }
}

This will 'stringify' an error message so it makes sense when you read it on a console or MessageBox or whatever.

Making Your Own OggVorbis Player

If you're with me so far then you must be pretty serious about getting this to work for you. Don't worry! We are almost done. All that we need do now is use our newly designed class to play an Ogg file. It should be a relatively simple process from here on in. We have done the hardest part. I won't assume that you will be using this in a game loop, but I'll keep it in mind when designing the loop.

int main(int argc, char* argv[])
{
    ogg_stream ogg;

    alutInit(&argc, argv);

This should be a no-brainer.

    try
    {
        if (argc < 2)
            throw string("oggplayer *.ogg");

        ogg.open(argv[1]);

        ogg.display();
Since we are using C++ we will also be using the try/catch/throw keywords for exception handling. You have probably noticed that I've been throwing strings throughout the code in this article.

The first thing I have done here is check to make sure that the user has supplied us with a file path. If there are no arguments to the program then we can't really do anything, so we will simply show the user a little message indicating the Ogg extension. Not very informative, but a pretty standard way of handling this in a console application. If there was an argument to the program then we can use it to open a file. We'll also display info on the Ogg file for completeness.

        if (!ogg.playback())
            throw string("Ogg refused to play.");

        while (ogg.update())
        {
            if (!ogg.playing())
            {
                if (!ogg.playback())
                    throw string("Ogg abruptly stopped.");
                else
                    cout << "Ogg stream was interrupted.\n";
            }
        }

        cout << "Program normal termination.";
        cin.get();
    }

I find a program's main loop is always the most fun part to write. We begin by playing the Ogg. An Ogg may refuse to play if there is not enough data to stream through the initial 2 buffers (in other words the Ogg is too small) or if it simply cannot read the file.

The program will continually loop as long as the 'update' method continues to return true, and it will continue to return true as long as it can successfully read and play the audio stream. Within the loop we make sure that the Ogg is playing. This may seem like it serves the same purpose as 'update', but it also catches some other issues that have to do with the system. As a simple test I ran this program while also running a lot of other programs at the same time, eating up as much CPU time as I could to see how oggplayer would react. You may be surprised to find that the interrupt message does get displayed: streaming can be interrupted by external processes. This does not raise an error, however. Keep that in mind.

If nothing else happens then the program will exit normally with a little message to let you know.

    catch (string error)
    {
        cout << error;
        cin.get();
    }

This will catch an error string if one was thrown and display some information on why the program had to terminate.

    ogg.release();

    alutExit();

    return 0;
}

The end of our main is also a no-brainer.

Answers To Questions You May Be Asking

Can I use more than one buffer for the stream?

In short, yes. There can be any number of buffers queued on the source at a time, and doing this may actually give you better results too. As I said earlier, with just 2 buffers in the queue at any time and the CPU maxed out (or if the system hangs), the source may actually finish playing before the stream has decoded another chunk. Having 3 or even 4 buffers in the queue will keep you a little further ahead in case you miss the update.

How often should I call ogg.update?

This is going to vary depending on several things. If you want a quick answer I'll tell you to update as often as you can, but that is not really necessary; you just have to update before the source finishes playing to the end of the queue. The biggest factors that are going to affect this are the buffer size and the number of buffers dedicated to the queue. Obviously, if you have more data ready to play to begin with, fewer updates will be necessary.

Is it safe to stream more than one Ogg at a time?

It should be fine. I haven't performed any extreme testing but I don't see why not. Generally you will not have that many streams anyway. You may have one to play some background music, and the occasional character dialog for a game, but most sound effects are too short to bother with streaming. Most of your sources will only ever have one buffer attached to them.

So what is with the name?

"Ogg" is the name of Xiph.org's container format for audio, video, and metadata. "Vorbis" is the name of a specific audio compression scheme that's designed to be contained in Ogg. As for the specific meanings of the words... well, that's a little harder to tell. I think they involve some strange relationship to Terry Pratchett novels. Here is a little page that goes into the details.

How come my console displays 'oggplayer *.ogg' when I run the example program?

You have to specify a filename. You can simply drag and drop an Ogg file onto the program and it should work fine. I did not supply a sample Ogg to go with the example, however, partially due to the legality of using someone else's music and partially to reduce the file size. If someone wants to donate a piece (and I like it), I may add it to the example files. But please do not submit copyrighted material or hip-hop.

Download the Dev-C++ source and project file.
Download the Visual C++ 6.0 source and project file - (ported by TheCell)
Download the Delphi port for this tutorial - (ported by Sascha Willems, updated by Peter Hine)
Download the Linux port of this tutorial - (ported by Lee Trager)

OP | Posted on 2006-10-24 14:12:17

OpenAL Lesson 2: Looping and Fadeaway

Hope you found the last tutorial of some use. I know I did. This will be a real quick and easy tutorial. It won't get too much more complicated at this point.

#include <conio.h>
#include <time.h>
#include <stdlib.h>
#include <AL/al.h>
#include <AL/alc.h>
#include <AL/alu.h>
#include <AL/alut.h>

// Buffers hold sound data.
ALuint Buffer;

// Sources are points of emitting sound.
ALuint Source;

// Position of the source sound.
ALfloat SourcePos[] = { 0.0, 0.0, 0.0 };

// Velocity of the source sound.
ALfloat SourceVel[] = { 0.0, 0.0, 0.1 };

// Position of the listener.
ALfloat ListenerPos[] = { 0.0, 0.0, 0.0 };

// Velocity of the listener.
ALfloat ListenerVel[] = { 0.0, 0.0, 0.0 };

// Orientation of the listener. (first 3 elements are "at", second 3 are "up")
ALfloat ListenerOri[] = { 0.0, 0.0, -1.0,  0.0, 1.0, 0.0 };

There is only one change in the code since the last tutorial in this first section: we altered the source's velocity. Its 'z' field is now 0.1.

ALboolean LoadALData()
{
    // Variables to load into.

    ALenum format;
    ALsizei size;
    ALvoid* data;
    ALsizei freq;
    ALboolean loop;

    // Load wav data into a buffer.

    alGenBuffers(1, &Buffer);

    if (alGetError() != AL_NO_ERROR)
        return AL_FALSE;

    alutLoadWAVFile("wavdata/Footsteps.wav", &format, &data, &size, &freq, &loop);
    alBufferData(Buffer, format, data, size, freq);
    alutUnloadWAV(format, data, size, freq);

    // Bind buffer with a source.

    alGenSources(1, &Source);

    if (alGetError() != AL_NO_ERROR)
        return AL_FALSE;

    alSourcei (Source, AL_BUFFER,   Buffer   );
    alSourcef (Source, AL_PITCH,    1.0f     );
    alSourcef (Source, AL_GAIN,     1.0f     );
    alSourcefv(Source, AL_POSITION, SourcePos);
    alSourcefv(Source, AL_VELOCITY, SourceVel);
    alSourcei (Source, AL_LOOPING,  AL_TRUE  );

    // Do an error check and return.

    if (alGetError() != AL_NO_ERROR)
        return AL_FALSE;

    return AL_TRUE;
}

Two changes in this section. First, we are loading the file "Footsteps.wav". We are also explicitly setting the source's 'AL_LOOPING' value to 'AL_TRUE'. What this means is that when the source is prompted to play, it will continue to play until stopped; it will start over again after the sound clip has ended.

void SetListenerValues()
{
    alListenerfv(AL_POSITION,    ListenerPos);
    alListenerfv(AL_VELOCITY,    ListenerVel);
    alListenerfv(AL_ORIENTATION, ListenerOri);
}

void KillALData()
{
    alDeleteBuffers(1, &Buffer);
    alDeleteSources(1, &Source);
    alutExit();
}

Nothing has changed here.

int main(int argc, char *argv[])
{
    // Initialize OpenAL and clear the error bit.
    alutInit(NULL,0);
    alGetError();

    // Load the wav data.
    if (LoadALData() == AL_FALSE)
        return 0;

    SetListenerValues();

    // Setup an exit procedure.
    atexit(KillALData);

    // Begin the source playing.
    alSourcePlay(Source);

    // Loop
    ALint time = 0;
    ALint elapse = 0;

    while (!kbhit())
    {
        elapse += clock() - time;
        time += elapse;

        if (elapse > 50)
        {
            elapse = 0;

            SourcePos[0] += SourceVel[0];
            SourcePos[1] += SourceVel[1];
            SourcePos[2] += SourceVel[2];

            alSourcefv(Source, AL_POSITION, SourcePos);
        }
    }


    return 0;
}

The only thing that has changed in this code is the loop. Instead of playing and stopping the audio sample, it will slowly get quieter as the source's position grows more distant. We do this by slowly incrementing the position by the velocity over time. The time is sampled by checking the system clock, which gives us a tick count. It shouldn't be necessary to change this, but if the audio clip fades too fast you might want to change 50 to some higher number. Pressing any key will end the loop.

OP | Posted on 2006-10-24 14:13:14

OpenAL Lesson 3: Multiple Sources

Hello. It's been a while since my last tutorial, but better late than never I guess. Since I'm sure you're all impatient to read the latest tutorial, I'll just jump right into it. What we hope to accomplish with this one is to be able to play more than one audio sample at a time. Very intense games usually have all kinds of stuff going on, involving different sound clips. It won't be hard to implement any of this though. Doing multiple sounds is similar to doing just one.

#include <conio.h>
#include <time.h>
#include <stdlib.h>
#include <math.h>
#include <AL/al.h>
#include <AL/alc.h>
#include <AL/alu.h>
#include <AL/alut.h>

// Maximum data buffers we will need.
#define NUM_BUFFERS 3

// Maximum emissions we will need.
#define NUM_SOURCES 3

// These index the buffers and sources.
#define BATTLE      0
#define GUN1        1
#define GUN2        2

// Buffers hold sound data.
ALuint Buffers[NUM_BUFFERS];

// Sources are points of emitting sound.
ALuint Sources[NUM_SOURCES];

// Position of the source sounds.
ALfloat SourcePos[NUM_SOURCES][3];

// Velocity of the source sounds.
ALfloat SourceVel[NUM_SOURCES][3];


// Position of the listener.
ALfloat ListenerPos[] = { 0.0, 0.0, 0.0 };

// Velocity of the listener.
ALfloat ListenerVel[] = { 0.0, 0.0, 0.0 };

// Orientation of the listener. (first 3 elements are "at", second 3 are "up")
ALfloat ListenerOri[] = { 0.0, 0.0, -1.0, 0.0, 1.0, 0.0 };

I guess this little piece of source code will be familiar to a lot of you who've read the first two tutorials. The only difference is that we now have 3 different sound effects that we are going to load into the OpenAL sound system.

ALboolean LoadALData()
{
    // Variables to load into.

    ALenum format;
    ALsizei size;
    ALvoid* data;
    ALsizei freq;
    ALboolean loop;

    // Load wav data into buffers.

    alGenBuffers(NUM_BUFFERS, Buffers);

    if (alGetError() != AL_NO_ERROR)
        return AL_FALSE;

    alutLoadWAVFile("wavdata/Battle.wav", &format, &data, &size, &freq, &loop);
    alBufferData(Buffers[BATTLE], format, data, size, freq);
    alutUnloadWAV(format, data, size, freq);

    alutLoadWAVFile("wavdata/Gun1.wav", &format, &data, &size, &freq, &loop);
    alBufferData(Buffers[GUN1], format, data, size, freq);
    alutUnloadWAV(format, data, size, freq);

    alutLoadWAVFile("wavdata/Gun2.wav", &format, &data, &size, &freq, &loop);
    alBufferData(Buffers[GUN2], format, data, size, freq);
    alutUnloadWAV(format, data, size, freq);

    // Bind buffers into audio sources.

    alGenSources(NUM_SOURCES, Sources);

    if (alGetError() != AL_NO_ERROR)
        return AL_FALSE;

    alSourcei (Sources[BATTLE], AL_BUFFER,   Buffers[BATTLE]  );
    alSourcef (Sources[BATTLE], AL_PITCH,    1.0              );
    alSourcef (Sources[BATTLE], AL_GAIN,     1.0              );
    alSourcefv(Sources[BATTLE], AL_POSITION, SourcePos[BATTLE]);
    alSourcefv(Sources[BATTLE], AL_VELOCITY, SourceVel[BATTLE]);
    alSourcei (Sources[BATTLE], AL_LOOPING,  AL_TRUE          );

    alSourcei (Sources[GUN1], AL_BUFFER,   Buffers[GUN1]  );
    alSourcef (Sources[GUN1], AL_PITCH,    1.0            );
    alSourcef (Sources[GUN1], AL_GAIN,     1.0            );
    alSourcefv(Sources[GUN1], AL_POSITION, SourcePos[GUN1]);
    alSourcefv(Sources[GUN1], AL_VELOCITY, SourceVel[GUN1]);
    alSourcei (Sources[GUN1], AL_LOOPING,  AL_FALSE       );

    alSourcei (Sources[GUN2], AL_BUFFER,   Buffers[GUN2]  );
    alSourcef (Sources[GUN2], AL_PITCH,    1.0            );
    alSourcef (Sources[GUN2], AL_GAIN,     1.0            );
    alSourcefv(Sources[GUN2], AL_POSITION, SourcePos[GUN2]);
    alSourcefv(Sources[GUN2], AL_VELOCITY, SourceVel[GUN2]);
    alSourcei (Sources[GUN2], AL_LOOPING,  AL_FALSE       );

    // Do another error check and return.

    if( alGetError() != AL_NO_ERROR)
        return AL_FALSE;

    return AL_TRUE;
}

This code looks quite a bit different at first, but it isn't really. Basically we load the file data into our 3 buffers, then bind the 3 buffers to our 3 sources respectively. The only other difference is that "Battle.wav" (source index 0) is looping while the rest are not.

void SetListenerValues()
{
    alListenerfv(AL_POSITION,    ListenerPos);
    alListenerfv(AL_VELOCITY,    ListenerVel);
    alListenerfv(AL_ORIENTATION, ListenerOri);
}

void KillALData()
{
    alDeleteBuffers(NUM_BUFFERS, &Buffers[0]);
    alDeleteSources(NUM_SOURCES, &Sources[0]);
    alutExit();
}

I don't think we changed anything in this code.

int main(int argc, char *argv[])
{
    // Initialize OpenAL and clear the error bit.
    alutInit(NULL, 0);
    alGetError();

    // Load the wav data.
    if (LoadALData() == AL_FALSE)
        return 0;

    SetListenerValues();

    // Setup an exit procedure.
    atexit(KillALData);

    // Begin the battle sample to play.
    alSourcePlay(Sources[BATTLE]);

    // Go through all the sources and check that they are playing.
    // Skip the first source because it is looping anyway (will always be playing).
    ALint play;

    while (!kbhit())
    {
        for (int i = 1; i < NUM_SOURCES; i++)
        {
            alGetSourcei(Sources[i], AL_SOURCE_STATE, &play);

            if (play != AL_PLAYING)
            {
                // Pick a random position around the listener to play the source.
                double theta = (double)(rand() % 360) * 3.14 / 180.0;

                SourcePos[i][0] = -float(cos(theta));
                SourcePos[i][1] = -float(rand() % 2);
                SourcePos[i][2] = -float(sin(theta));

                alSourcefv(Sources[i], AL_POSITION, SourcePos[i]);

                alSourcePlay(Sources[i]);
            }
        }
    }

    return 0;
}

Here is the interesting part of this tutorial. We go through each of the sources to make sure it's playing. If it is not, then we set it to play, but first we select a new point in 3D space for it to play from (just for kicks).

And bang! We are done. As most of you have probably seen, you don't have to do anything special to play more than one source at a time. OpenAL will handle all the mixing features to get the sounds right for their respective distances and velocities. And when it comes right down to it, isn't that the beauty of OpenAL?

You know, that was a lot easier than I thought. I don't know why I waited so long to write it. Anyway, if anyone reading wants to see something specific in future tutorials (not necessarily pertaining to OpenAL, I have quite an extensive knowledge base), drop me a line at lightonthewater@hotmail.com. I plan to do tutorials on sharing buffers and the Doppler effect in some later tutorial unless there is a request for something else. Have fun with the code!

OP | Posted on 2006-10-24 14:14:01

OpenAL Lesson 4: The ALC

Up until now we have been letting Alut do all the real tricky stuff for us, for example handling the audio devices. It's really nice that the Alut library is there to provide this functionality, but any smart coder will want to know exactly what they're doing. We may want to, at some point, use the Alc directly. In this tutorial we will expose the Alc layer and take a look at how to handle the devices on our own.

ALCdevice* pDevice;

ALCubyte DeviceName[] = "DirectSound3D";

pDevice = alcOpenDevice(DeviceName);

So what is an Alc device? Try to think of it in terms of a resource. OpenAL grabs a handle to the hardware being used, which must in turn be shared with the entire system. A device can be of a specific implementation as well, as in this case where we are using DirectSound as the audio device. This code grabs a handle to the hardware device and readies it to be used by the application. Eventually we should see more devices made for specific soundcards.

Passing NULL to 'alcOpenDevice' is a perfectly valid argument. It forces the Alc to use a default device.

ALCcontext* pContext;

pContext = alcCreateContext(pDevice, NULL);

alcMakeContextCurrent(pContext);

What is an Alc context? OpenGL coders will recall that there were rendering contexts used by OpenGL that controlled state management across different windows. An 'HGLRC', as they are called on Windows, could be created several times to enable multiple rendering windows, and different rendering states for each context could be achieved. An Alc context works on the same principle. First we tell it which device to use (which we have already created), then we make that context current. In theory you could create multiple rendering contexts for different windows, set the state variables differently, and have it work just fine. Although the term "rendering context" usually applies to a visual rendering, this is the term preferred in the SDK docs and should be the term used.

You may notice too that the second parameter in 'alcCreateContext' has been set to NULL. The OpenAL sdk from Creative Labs defines the following variables which are optional flags to that parameter.

  • ALC_FREQUENCY
  • ALC_REFRESH
  • ALC_SYNC

If you were to create multiple contexts you could make them interchangeable by making a call to 'alcMakeContextCurrent'. Sending NULL to 'alcMakeContextCurrent' is also a perfectly valid argument. It will prevent processing of any audio data. Be aware that even if you have multiple rendering contexts, you can only have one current at a time, and when your application needs to use two contexts interchangeably you must be the one to make sure the appropriate context is current. And if you do decide to do this, then there may be times when you want to know exactly which context is current without going through a big check.

ALCcontext* pCurContext;
pCurContext = alcGetCurrentContext();

Once you have your context you can also obtain the device in use by that context.

ALCdevice* pCurDevice;
pCurDevice = alcGetContextsDevice(pCurContext);

Above we used the context we retrieved to find out which device it was using. There is also one other cool feature that was built into Alc for handling contexts.

alcSuspendContext(pContext);
// Processing has been suspended to pContext.
alcProcessContext(pContext);
// Processing has been re-enabled to pContext.

What we have done above was stop, and then resume processing of audio data to the context. When processing has been suspended, no sound will be generated from data sent through that context. A further note on the rendering context: the OpenAL 1.0 spec does imply, but does not explicitly say, that sources and buffers may be used across contexts. The "lifetime" of a source or buffer during the application, is said to be valid as long as the source and buffer id is valid (i.e. they have not been deleted).

alcMakeContextCurrent(NULL);
alcDestroyContext(pContext);
alcCloseDevice(pDevice);

And that is how we clean up. The current context is defaulted to NULL, the context we created is released, and the handle to the device is given back to the system resources. There are but a few more Alc functions we have not yet covered.

ALenum alcGetError(ALCdevice* device);

ALboolean alcIsExtensionPresent(ALCdevice* device, ALubyte* extName);

ALvoid* alcGetProcAddress(ALCdevice* device, ALubyte* funcName);

ALenum alcGetEnumValue(ALCdevice* device, ALubyte* enumName);

ALubyte* alcGetString(ALCdevice* device, ALenum token);

ALvoid alcGetIntegerv(ALCdevice* device, ALenum token, ALsizei size, ALint* dest);

It may be pretty obvious to you what these do, but let's humour ourselves and have a closer look. First we have 'alcGetError', which is just like 'alGetError' but returns Alc errors. The next three functions are for querying Alc extensions; this was just the creators planning ahead, as there are no Alc extensions yet. The last function, 'alcGetIntegerv', will return the Alc version when passed 'ALC_MAJOR_VERSION' or 'ALC_MINOR_VERSION'.

The function 'alcGetString' is pretty cool. It can take any of the following three parameters to 'token':

  • ALC_DEFAULT_DEVICE_SPECIFIER
  • ALC_DEVICE_SPECIFIER
  • ALC_EXTENSIONS

The first will return the device string which your OpenAL implementation will prefer you to use. In current OpenAL this should be "DirectSound3D", like we used above. The second token will return a list of specifiers, but in current OpenAL will only return "DirectSound" (without the "3D" for some reason). The last will return a list of Alc extensions, of which none exist yet.
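As a quick illustration of these queries (a sketch, assuming <stdio.h> is included; passing NULL asks about the default device):

// Sketch: query the Alc version and the preferred device string.
ALint major, minor;
alcGetIntegerv(NULL, ALC_MAJOR_VERSION, 1, &major);
alcGetIntegerv(NULL, ALC_MINOR_VERSION, 1, &minor);

const ALubyte* preferred = alcGetString(NULL, ALC_DEFAULT_DEVICE_SPECIFIER);

printf("Alc %d.%d, preferred device: %s\n", (int)major, (int)minor, (const char*)preferred);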

Well that's most of Alc for you. I hope it gave you a better understanding of how OpenAL interacts with the operating system. You might try writing your own initialization routines so you can cast off Alut altogether. Either way, have fun with it.

See the Java Bindings for OpenAL page for the Java version of this tutorial - adapted by: Athomas Goldberg

OP | Posted on 2006-10-24 14:14:47

OpenAL Lesson 5: Sources Sharing Buffers

At this point in the OpenAL series I will show one method of having your buffers be shared among many sources. This is a very logical and natural step, and it is so easy that some of you may have already done this yourself. If you have you may just skip this tutorial in total and move on. But for those keeners who want to read all of the info I've got to give, you may find this interesting. Plus, we will be implementing the Alc layer directly so that we can use some of that knowledge gained in lesson 4. On top of that we will create a program you might even use!

Well, here we go. I've decided to only go over bits of the code that are significant, since most of the code has been repeated so far in the series. Check out the full source code in the download. One more thing: this tutorial will be using vectors from the Standard Template Library, so make sure you have it installed and have at least a little knowledge of its operation. I will not cover the STL here because this is an OpenAL tutorial.

// These index the buffers.
#define THUNDER     0
#define WATERDROP   1
#define STREAM      2
#define RAIN        3
#define CHIMES      4
#define OCEAN       5
#define NUM_BUFFERS 6


// Buffers hold sound data.
ALuint Buffers[NUM_BUFFERS];

// A vector list of sources for multiple emissions.
vector<ALuint> Sources;

First I've written out a few macros that we can use to index the buffer array. We will be using several wav files so we need quite a few buffers here. Instead of using an array for storing the sources we will use an STL vector. We chose to do this because it allows us to have a dynamic number of sources. We can just keep adding sources to the scene until OpenAL runs out of them. This is also the first tutorial where we will deal with sources as being a resource that will run out. And yes, they will run out; they are finite.

ALboolean InitOpenAL()
{
    ALCdevice* pDevice;
    ALCcontext* pContext;
    ALCubyte* deviceSpecifier;
    ALCubyte deviceName[] = "DirectSound3D";

    // Get handle to device.
    pDevice = alcOpenDevice(deviceName);

    // Get the device specifier.
    deviceSpecifier = alcGetString(pDevice, ALC_DEVICE_SPECIFIER);

    printf("Using device '%s'.\n", szDeviceSpecifier);

    // Create audio context.
    pContext = alcCreateContext(pDevice, NULL);

    // Set active context.
    alcMakeContextCurrent(pContext);

    // Check for an error.
    if (alcGetError(pDevice) != ALC_NO_ERROR)
        return AL_FALSE;

    return AL_TRUE;
}

This is some sample code from what we learned in the last tutorial. We get a handle to the device "DirectSound3D", and then obtain a rendering context for our application. This context is set to current and the function will check if everything went smoothly before we return success.

void ExitOpenAL()
{
    ALCcontext* pCurContext;
    ALCdevice* pCurDevice;

    // Get the current context.
    pCurContext = alcGetCurrentContext();

    // Get the device used by that context.
    pCurDevice = alcGetContextsDevice(pCurContext);

    // Reset the current context to NULL.
    alcMakeContextCurrent(NULL);

    // Release the context and the device.
    alcDestroyContext(pCurContext);
    alcCloseDevice(pCurDevice);
}

This will do the opposite we did in the previous code. It retrieves the context and device that our application was using and releases them. It also sets the current context to NULL (the default) which will suspend the processing of any data sent to OpenAL. It is important to reset the current context to NULL or else you will have an invalid context trying to process data. The results of doing this can be unpredictable.

If you are using a multi-context application you may need to have a more advanced way of dealing with initialization and shutdown. I would recommend making all devices and contexts global and closing them individually, rather than retrieving the current context.

ALboolean LoadALData()
{
    // Variables to load into.
    ALenum format;
    ALsizei size;
    ALvoid* data;
    ALsizei freq;
    ALboolean loop;

    // Load wav data into buffers.
    alGenBuffers(NUM_BUFFERS, Buffers);

    if(alGetError() != AL_NO_ERROR)
        return AL_FALSE;

    alutLoadWAVFile("wavdata/thunder.wav", &format, &data, &size, &freq, &loop);
    alBufferData(Buffers[THUNDER], format, data, size, freq);
    alutUnloadWAV(format, data, size, freq);

    alutLoadWAVFile("wavdata/waterdrop.wav", &format, &data, &size, &freq, &loop);
    alBufferData(Buffers[WATERDROP], format, data, size, freq);
    alutUnloadWAV(format, data, size, freq);

    alutLoadWAVFile("wavdata/stream.wav", &format, &data, &size, &freq, &loop);
    alBufferData(Buffers[STREAM], format, data, size, freq);
    alutUnloadWAV(format, data, size, freq);

    alutLoadWAVFile("wavdata/rain.wav", &format, &data, &size, &freq, &loop);
    alBufferData(Buffers[RAIN], format, data, size, freq);
    alutUnloadWAV(format, data, size, freq);

    alutLoadWAVFile("wavdata/ocean.wav", &format, &data, &size, &freq, &loop);
    alBufferData(Buffers[OCEAN], format, data, size, freq);
    alutUnloadWAV(format, data, size, freq);

    alutLoadWAVFile("wavdata/chimes.wav", &format, &data, &size, &freq, &loop);
    alBufferData(Buffers[CHIMES], format, data, size, freq);
    alutUnloadWAV(format, data, size, freq);

    // Do another error check and return.
    if (alGetError() != AL_NO_ERROR)
        return AL_FALSE;

    return AL_TRUE;
}

We've totally removed the source generation from this function. That's because from now on we will be initializing the sources separately.

void AddSource(ALint type)
{
    ALuint source;

    alGenSources(1, &source);

    if (alGetError() != AL_NO_ERROR)
    {
        printf("Error generating audio source.");
        exit(-1);
    }

    alSourcei (source, AL_BUFFER,   Buffers[type]);
    alSourcef (source, AL_PITCH,    1.0          );
    alSourcef (source, AL_GAIN,     1.0          );
    alSourcefv(source, AL_POSITION, SourcePos    );
    alSourcefv(source, AL_VELOCITY, SourceVel    );
    alSourcei (source, AL_LOOPING,  AL_TRUE      );

    alSourcePlay(source);

    Sources.push_back(source);
}

Here's the function that will generate the sources for us. It will generate a single source for any one of the loaded buffers we created in the previous function, given the buffer index 'type', which is one of the macros we defined right at the start of this tutorial. We do an error check to make sure we have a source to play (like I said, they are finite). If a source cannot be allocated then the program will exit.

void KillALData()
{
    for (vector<ALuint>::iterator iter = Sources.begin(); iter != Sources.end(); ++iter)
        alDeleteSources(1, &*iter);
    Sources.clear();
    alDeleteBuffers(NUM_BUFFERS, Buffers);
    ExitOpenAL();
}

This function has been modified a bit to accommodate the STL list. We have to delete each source in the list individually and then clear the list which will effectively destroy it.

    ALubyte c = ' ';

    while (c != 'q')
    {
        c = getche();

        switch (c)
        {
            case 'w': AddSource(WATERDROP); break;
            case 't': AddSource(THUNDER);   break;
            case 's': AddSource(STREAM);    break;
            case 'r': AddSource(RAIN);      break;
            case 'o': AddSource(OCEAN);     break;
            case 'c': AddSource(CHIMES);    break;
        };
    }

Here is the program's inner loop, taken straight out of our main. Basically it waits for some keyboard input, and on certain key hits it will create a new source of a certain type and add it to the audio scene. Essentially what we have created here is something like one of those nature tapes that people listen to for relaxation. Ours is a little better since it allows the user to customize which sounds they want in the background. Pretty neat, eh? I've been listening to mine while I code. It's a Zen experience (I'm listening to it right now).

The program can be expanded for using more wav files, and have the added feature of placing the sources around the scene in arbitrary positions. You could even allow for sources to play with a given frequency rather than have them loop. However this would require GUI routines that go beyond the scope of the tutorial. A full featured "Weathering Engine" would be a nifty program to make though. ;)

OP | Posted on 2006-10-24 14:15:19

OpenAL Lesson 6: Advanced Loading and Error Handles

We've been doing some pretty simple stuff up until now that didn't require us to be very precise in the way we've handled things. The reason for this is that we have been writing code for simplicity in order to learn more easily, rather than for robustness. Since we are going to move into some advanced stuff soon, we will take some time to learn the proper ways. Most importantly we will learn a more advanced way of handling errors. We will also reorganize the way we have been loading audio data. There wasn't anything wrong with our methods in particular, but we will need a more organized and flexible approach to the process.

We will first consider a few functions that will help us out a lot by the time we have finished.

string GetALErrorString(ALenum err);
/*
 * 1) Identify the error code.
 * 2) Return the error as a string.
 */

string GetALCErrorString(ALenum err);
/*
 * 1) Identify the error code.
 * 2) Return the error as a string.
 */

ALuint LoadALBuffer(string path);
/*
 * 1) Creates a buffer.
 * 2) Loads a wav file into the buffer.
 * 3) Returns the buffer id.
 */

ALuint GetLoadedALBuffer(string path);
/*
 * 1) Checks if file has already been loaded.
 * 2) If it has been loaded already, return the buffer id.
 * 3) If it has not been loaded, load it and return buffer id.
 */

ALuint LoadALSample(string path, bool loop);
/*
 * 1) Creates a source.
 * 2) Calls 'GetLoadedALBuffer' with 'path' and uses the
 *    returned buffer id as its source's buffer.
 * 3) Returns the source id.
 */

void KillALLoadedData();
/*
 * 1) Releases temporary loading phase data.
 */

bool LoadALData();
/*
 * 1) Loads all buffers and sources for the application.
 */

void KillALData();
/*
 * 1) Releases all buffers.
 * 2) Releases all sources.
 */

vector<string> LoadedFiles; // Holds loaded file paths temporarily.
vector<ALuint> Buffers; // Holds all loaded buffers.
vector<ALuint> Sources; // Holds all validated sources.

Take a close look at the functions and try to understand what we are going to be doing. Basically what we are trying to create is a system in which we no longer have to worry about the relationship between buffers and sources. We can call for the creation of a source from a file, and this system will handle the buffer's creation on its own so we don't duplicate a buffer (have two buffers with the same data). This system will handle the buffers as a limited resource, and will handle that resource efficiently.

string GetALErrorString(ALenum err)
{
    switch(err)
    {
        case AL_NO_ERROR:
            return string("AL_NO_ERROR");

        case AL_INVALID_NAME:
            return string("AL_INVALID_NAME");

        case AL_INVALID_ENUM:
            return string("AL_INVALID_ENUM");

        case AL_INVALID_VALUE:
            return string("AL_INVALID_VALUE");

        case AL_INVALID_OPERATION:
            return string("AL_INVALID_OPERATION");

        case AL_OUT_OF_MEMORY:
            return string("AL_OUT_OF_MEMORY");

        default:
            return string("Unknown AL error.");
    };
}

This function will convert an OpenAL error code to a string so it can be read on the console (or some other device). The OpenAL SDK says that the only exception that needs to be looked for in the current version is the 'AL_OUT_OF_MEMORY' error. However, we will account for all the errors so that our code will be up to date with later versions.

string GetALCErrorString(ALenum err)
{
    switch(err)
    {
        case ALC_NO_ERROR:
            return string("ALC_NO_ERROR");

        case ALC_INVALID_DEVICE:
            return string("ALC_INVALID_DEVICE");

        case ALC_INVALID_CONTEXT:
            return string("ALC_INVALID_CONTEXT");

        case ALC_INVALID_ENUM:
            return string("ALC_INVALID_ENUM");

        case ALC_INVALID_VALUE:
            return string("ALC_INVALID_VALUE");

        case ALC_OUT_OF_MEMORY:
            return string("ALC_OUT_OF_MEMORY");

        default:
            return string("Unknown ALC error.");
    };
}

This function will perform a similar task to the previous one, except this one will interpret Alc errors. OpenAL and Alc share some common error ids, but they are not similar enough to handle with a single function.

One more note about the function 'alGetError': the OpenAL sdk defines that it holds only a single error at a time (i.e. there is no error stacking). When the function is invoked it returns the first error recorded since the last call, and then resets the error bit to 'AL_NO_ERROR'. In other words, a new error is only stored if the error bit is clear at the time it occurs, so errors can be lost if you don't check for them promptly.
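A practical consequence of this is a common pattern: flush any stale error before the call you care about, then read the state exactly once afterwards. A small sketch ('source' standing in for any valid source id):

    // Flush any stale error so the next check reflects only our call.
    alGetError();

    alSourcePlay(source);

    // Read the error and reset the bit in one step.
    ALenum err = alGetError();

    if (err != AL_NO_ERROR)
        cout << "OpenAL error: " << GetALErrorString(err) << endl;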

ALuint LoadALBuffer(string path)
{
    // Variables to store data which defines the buffer.
    ALenum format;
    ALsizei size;
    ALvoid* data;
    ALsizei freq;
    ALboolean loop;

    // Buffer id and error checking variable.
    ALuint buffer;
    ALenum result;

    // Generate a buffer. Check that it was created successfully.
    alGenBuffers(1, &buffer);

    if ((result = alGetError()) != AL_NO_ERROR)
        throw GetALErrorString(result);

    // Read in the wav data from file. Check that it loaded correctly.
    alutLoadWAVFile((ALbyte*)path.c_str(), &format, &data, &size, &freq, &loop);

    if ((result = alGetError()) != AL_NO_ERROR)
        throw GetALErrorString(result);

    // Send the wav data into the buffer. Check that it was received properly.
    alBufferData(buffer, format, data, size, freq);

    if ((result = alGetError()) != AL_NO_ERROR)
        throw GetALErrorString(result);

    // Get rid of the temporary data.
    alutUnloadWAV(format, data, size, freq);

    if ((result = alGetError()) != AL_NO_ERROR)
        throw GetALErrorString(result);

    // Return the buffer id.
    return buffer;
}

As you can see, we do an error check at every possible phase of the load. Any number of things can go wrong here: there could be no more system memory for the buffer creation or for the data to be loaded, the wav file itself may not exist, or an invalid value could be passed to one of the OpenAL functions, any of which will generate an error.

ALuint GetLoadedALBuffer(string path)
{
    int count = 0; // 'count' will be an index to the buffer list.

    ALuint buffer; // Buffer id for the loaded buffer.


    // Iterate through each file path in the list.
    for(vector<string>::iterator iter = LoadedFiles.begin(); iter != LoadedFiles.end(); ++iter, count++)
    {
        // If this file path matches one we have loaded already, return the buffer id for it.
        if(*iter == path)
            return Buffers[count];
    }

    // If we have made it this far then this file is new and we will create a buffer for it.
    buffer = LoadALBuffer(path);

    // Add this new buffer to the list, and register that this file has been loaded already.
    Buffers.push_back(buffer);

    LoadedFiles.push_back(path);

    return buffer;
}

This will probably be the piece of code most people have trouble with, but it's really not that complex. We are searching through a list which contains the file paths of all the wav's we have loaded so far. If one of the paths matches the one we want to load, we simply return the id of the buffer we loaded it into the first time. This way, as long as we consistently load our files through this function, we will never have buffers wasted due to duplication. Every file loaded this way is also tracked in its own list. The 'Buffers' list is parallel to the 'LoadedFiles' list: the buffer at a given index in 'Buffers' was created from the path at the same index in 'LoadedFiles'.
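As an aside, the same path-to-buffer lookup could be expressed with an associative container instead of two parallel vectors. A minimal sketch (my own variation, not the lesson's code):

    #include <map>

    map<string, ALuint> LoadedBuffers; // path -> buffer id

    ALuint GetLoadedALBuffer(string path)
    {
        // Reuse the buffer if this path has been loaded before.
        map<string, ALuint>::iterator iter = LoadedBuffers.find(path);

        if(iter != LoadedBuffers.end())
            return iter->second;

        // Otherwise load it once and remember it.
        ALuint buffer = LoadALBuffer(path);

        LoadedBuffers[path] = buffer;

        return buffer;
    }

We'll stick with the parallel vectors here, since the rest of the lesson (including 'KillALData' below) relies on them.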

ALuint LoadALSample(string path, bool loop)
{
    ALuint source;
    ALuint buffer;
    ALenum result;

    // Get the files buffer id (load it if necessary).
    buffer = GetLoadedALBuffer(path);

    // Generate a source.
    alGenSources(1, &source);

    if ((result = alGetError()) != AL_NO_ERROR)
        throw GetALErrorString(result);

    // Setup the source properties.
    alSourcei (source, AL_BUFFER,   buffer   );
    alSourcef (source, AL_PITCH,    1.0      );
    alSourcef (source, AL_GAIN,     1.0      );
    alSourcefv(source, AL_POSITION, SourcePos);
    alSourcefv(source, AL_VELOCITY, SourceVel);
    alSourcei (source, AL_LOOPING,  loop     );

    // Save the source id.
    Sources.push_back(source);

    // Return the source id.
    return source;
}

Now that we have created a system which will handle the buffers for us, we just need an extension to it that will get the sources. In this code we obtain the result of a search for the file, which is the buffer id that the file was loaded into. This buffer is bound to the new source. We save the source id internally and also return it.

void KillALLoadedData()
{
    LoadedFiles.clear();
}

The global vector 'LoadedFiles' stores the file path of every wav file that was loaded into a buffer. It doesn't make sense to keep this data lying around after we have loaded all of our data, so we dispose of it.

// Source id's.

ALuint phaser1;
ALuint phaser2;

void LoadALData()
{
    // Anything for your application here. No worrying about buffers.
    phaser1 = LoadALSample("wavdata/phaser.wav", false);
    phaser2 = LoadALSample("wavdata/phaser.wav", true);

    KillALLoadedData();
}

We have seen this function in previous tutorials. It represents the part of a program which loads all the wav's used by the program, and in it we can see why our system is useful. Even though we have made two calls to load the same wav file into two distinct sources, the buffer for the file 'phaser.wav' will only be created once, and the sources 'phaser1' and 'phaser2' will both use that buffer for playback. There is no more concern for handling buffers because the system handles them automatically.
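As a quick usage note (my own snippet), both sources can be played at once; each keeps its own playback state even though they share a single buffer:

    alSourcePlay(phaser1); // plays once
    alSourcePlay(phaser2); // loops until stopped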

void KillALData()
{
    // Release all buffer data.
    for (vector<ALuint>::iterator iter = Buffers.begin(); iter != Buffers.end(); ++iter)
        alDeleteBuffers(1, &(*iter));

    // Release all source data.
    for (vector<ALuint>::iterator iter = Sources.begin(); iter != Sources.end(); ++iter)
        alDeleteSources(1, &(*iter));

    // Destroy the lists.
    Buffers.clear();
    Sources.clear();
}

All along we have been storing the buffer and source id's in stl vectors. We free all the buffers and sources by going through the lists and releasing each id individually, after which we destroy the lists themselves. All we need to do now is catch the OpenAL errors that we have thrown.

    try
    {
        InitOpenAL();

        LoadALData();
    }
    catch(string err)
    {
        cout << "OpenAL error: " << err.c_str() << endl;
    }

If something has gone wrong during the course of the load we will be notified of it right away. When we catch the error it will be reported on the console. We can use this for debugging or general error reporting.
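For context, a minimal sketch of how this might sit in a full program (my own arrangement; 'InitOpenAL' is assumed to wrap the context setup from the earlier lessons):

    int main(int argc, char* argv[])
    {
        try
        {
            InitOpenAL();  // assumed wrapper for the earlier setup code
            LoadALData();
        }
        catch(string err)
        {
            cout << "OpenAL error: " << err.c_str() << endl;
            return 1;
        }

        alSourcePlay(phaser1);

        // ... the application's main loop would run here ...

        KillALData();

        return 0;
    }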

That's it: a more advanced way of reporting errors, and a more robust way of loading your wav files. We may need to make some modifications in the future to allow for more flexibility, but for now this code will serve as the basic file loading routine, and future tutorials will expand on it.

OP | Posted 2006-10-24 14:16:12

OpenAL Lesson 7: The Doppler Effect

A Look at Real-World Physics

I know this will be boring review for anyone who has taken a course in high school physics, but let's humour ourselves. The Doppler effect can be a tricky concept for some people, but it is a logical process, and kind of interesting when you get right down to it. To begin understanding the Doppler effect we first must understand what a "sound" really is. Basically, a sound is your mind's interpretation of a compression wave traveling through the air. Whenever the air becomes disturbed it starts a wave which compresses the air particles around it. This wave travels outward from its point of origin. Consider the following diagram.

In this diagram (on the left) the big red "S" stands for the source's position, and the big red "L" stands for (you guessed it) the Listener's position. Neither the source nor the Listener is moving. The source is emitting compression waves outward, which are represented in this diagram by the blue circles. The Listener is experiencing the sound exactly as it is being made. The Doppler effect is not actually present in this example since there is no motion; the Doppler effect only describes the warping of sound due to motion.

What you should try to do is picture this diagram animated. When the source emits a wave (the circles) it will look as though the wave is growing away from its point of origin, which is the source's position. A good example of a similar effect is the ripples in a pond. When you throw a pebble into a calm body of water it emits waves which constantly move away from the point of impact. Believe it or not, this arises from the exact same physical principles. But what does this have to do with the Doppler effect? Check out the next diagram (on the right).

Wow, what's going on here? The source is now in motion, indicated by the little red arrow. In fact the source is now moving towards the Listener with an implied velocity. Notice particularly that the waves (circles) are being displaced inside each other. The displacement follows the approximate path of the source which emits them. This is the key to the Doppler effect. Essentially what has happened is that the source has emitted each wave at a different point along its path of travel. The waves it emits do not move with it, but continue on their own path of travel from the point at which they were emitted.

So how does this affect the sound perceived by the Listener? Well, notice too in the last diagram that the waves (circles) between the source and the Listener are compressed together. This causes the sound waves to run together, which in turn makes the perceived sound seem faster. What we are talking about here is frequency. The distance between the waves affects the frequency of the sound, so when the source emitting the sound is in motion, it causes a change in frequency. You may notice too that the distance between the waves varies at different points in space. For example, on the opposite side of the moving source (anywhere along its previous path of travel) the distances are actually wider, so the frequency will be lower (distance and frequency have an inverse relationship). What this implies is that the frequency perceived by the Listener is relative to where the Listener is standing.

The motion of the Listener can also affect the frequency, though this one is a little harder to picture. If the source is still and the Listener is moving toward the source, then the frequency perceived by the Listener will be warped in exactly the same manner we described for the moving source.

If you still have trouble picturing this, consider the following two diagrams:

These two diagrams represent the sound in the form of a sine wave. Look at the first one. Think of each peak as one occurrence of the wave: the very top point of the wave corresponds to one of the blue circles in the previous set of diagrams, and the valleys are like the spaces in between the circles. The second diagram represents a compressed wave. When you compare the two you will notice an obvious difference: the second diagram simply has more wave occurrences in the same amount of space. Other ways of saying this are that they occur more often, with a greater regularity, or with a greater frequency.

For anyone who is interested in some added information: The velocity of the waves is the speed of sound. If the velocity of the source is greater than that of the wave, then the source is breaking the sound barrier.

The Physics of OpenAL

Ok, either you have understood my ramblings on the Doppler effect from above, or you have skipped them because you already have full knowledge of the Doppler effect and just want to know how it affects the OpenAL rendering pipeline. I think the best start to this section is to quote the OpenAL spec directly:

"The Doppler Effect depends on the velocities of Source and Listener relative to the medium, and the propagation speed of sound in that medium." - chapter 3, subsection 7

We can take this to mean that there are 3 factors which are going to affect the final frequency of the sound heard by the Listener. These factors are the velocity of the source, the velocity of the Listener, and a predefined speed of sound.

When we refer to a "medium", we mean the kind of material that both the source and Listener are "in". For example, sounds heard underwater are much different than sounds heard in the open air. Air and water are examples of different mediums. The reason sound differs so much between these mediums has to do with particle density. As we said before, sound is nothing but the motion of particles in the air. In a medium with a much greater particle density the sound will be much different because the particles are in closer contact, which allows the wave to travel much better. As an example of the opposite, think of outer space. In outer space there is an extremely low particle density; in fact there are only a few very light particles (mostly hydrogen) scattered about. This is why no sound can be heard in space.

Ok, let's get back on topic. OpenAL calculates the Doppler effect internally for us, so we need only define a few parameters that affect the calculation. We would do this in cases where we don't want a strictly realistic rendering, but rather want to exaggerate or de-emphasize the effect. The calculation goes like this:

shift = DOPPLER_FACTOR * freq * (DOPPLER_VELOCITY - l.velocity) / (DOPPLER_VELOCITY + s.velocity)

Constants are written in all caps to differentiate them. The "l" and "s" variables are the Listener and source respectively. "freq" is the initial, unaltered frequency of the emitted wave, and "shift" is the altered frequency of the wave. The term "shift" is the proper name for the altered frequency and will be used from now on. This final shifted frequency is what OpenAL samples for all affected audio streaming.
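For instance (made-up numbers, purely to exercise the formula): with DOPPLER_FACTOR at 1.0, DOPPLER_VELOCITY at 343.0, a stationary Listener (l.velocity = 0) and a source receding at 34.3 units per second, a 440 Hz tone shifts to 1.0 * 440 * (343 - 0) / (343 + 34.3) = 400 Hz, a drop in pitch of roughly 9%. Note that with this formula a positive source velocity lowers the pitch, which matches a receding source.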

We already know that we can define the velocity of both source and Listener by passing the 'AL_VELOCITY' field to 'alListenerfv' and 'alSourcefv'. The 'freq' parameter comes straight from the buffer properties when it was loaded from file. To set the constant values, the following functions are provided for us.

ALvoid alDopplerFactor(ALfloat factor);
ALvoid alDopplerVelocity(ALfloat velocity);

For 'alDopplerFactor' any non-negative value will suffice. Passing a negative value will raise an error of 'AL_INVALID_VALUE', and the whole command will be ignored. Passing zero is a perfectly valid argument. Doing this will disable the Doppler effect and may in fact help overall performance (but won't be as realistic). The effect of the Doppler factor will directly change the magnitude of the equation. A value of 1.0 will not change the effect at all. Passing anything between 0.0 and 1.0 will minimize the Doppler effect, and anything greater than 1.0 will maximize the effect.

For 'alDopplerVelocity' any non-negative non-zero value will suffice. Passing either a negative or a zero will raise an error of 'AL_INVALID_VALUE', and the whole command will be ignored. The Doppler velocity is essentially the speed of sound. Setting this will be like setting how fast sound can move through the medium. OpenAL has no sense of medium, but setting the velocity will give the effect of a medium. OpenAL also has no sense of units (kilometer, miles, parsecs), so keep that in mind when you set this value so it is consistent with all other notions of units that you have defined.
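As a small sketch of how these calls fit together (my own example; 'source' is assumed to be a valid source id, and the units are arbitrary as noted above):

    // Exaggerate the shift slightly; values between 0.0 and 1.0 would mute it.
    alDopplerFactor(1.2f);

    // Treat 343 units per second as the speed of sound in our world.
    alDopplerVelocity(343.0f);

    // The velocities that feed the shift equation.
    ALfloat listenerVel[] = { 0.0f, 0.0f,   0.0f };
    ALfloat sourceVel[]   = { 0.0f, 0.0f, -30.0f };

    alListenerfv(AL_VELOCITY, listenerVel);
    alSourcefv(source, AL_VELOCITY, sourceVel);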

See the Java Bindings for OpenAL page for the Java version of this tutorial (adapted by: Athomas Goldberg)

OP | Posted 2006-11-7 11:29:05
OpenAL Lesson 8: OggVorbis Streaming Using The Source Queue

Hello again fellow coders. I'm back after a fairly long hiatus with another tutorial on the OpenAL api, and I think this will be a beefy one. I would first like to thank the community for their support thus far in the series. I want to put out some special thanks to DevMaster.net, who are hosting the series on their website. This really got the ball rolling on the series, which has now been ported to Visual C++ 6 by TheCell and to Java by Athomas Goldberg for JOAL (Java Bindings for OpenAL). I have also heard the tutorials have been translated to Portuguese for the Brazilian GameDev. I would also like to give special thanks to Jorge Bernal for sending me some sample code which gave me enough of a kick in the pants to get me writing again. That was a big help (even though translating the code from Spanish was a chore).

An Introduction to OggVorbis

Ever heard of Ogg? There's more to it than a funny sounding name. It's the biggest thing to happen to audio compression since mp3 (and like mp3, it's typically used for music). Hopefully, one day, it will replace mp3 as the mainstream standard for compressed audio. Is it better than mp3? That question is a little more difficult to answer, and it's a pretty strong debate in some crowds. There are various arguments about compression ratio vs. sound quality which can get really cumbersome to read through. I personally don't have any opinion on which is "better"; I feel the evidence in either case is arguable and not worth mentioning. But for me the fact that Ogg is royalty free (which mp3 is not) wins the argument hands down. The mp3 licensing fee is by no means steep for developers with deep pockets, but for an independent developer working on a project in your spare time and on minimal resources, shelling out a few grand in fees is not an option. Ogg may be the answer to your prayers.

Designing Your OggVorbis Streaming API

Without further ado let's get to some code.

#include <string>
#include <iostream>
using namespace std;

#include <al/al.h>
#include <ogg/ogg.h>
#include <vorbis/codec.h>
#include <vorbis/vorbisenc.h>
#include <vorbis/vorbisfile.h>


#define BUFFER_SIZE (4096 * 8)

This tutorial will be written in pure C++ code, so we will first include some of the C++ standard headers. We of course include the OpenAL api (as always), and we will also include 4 new headers. These new headers are part of a set of libraries written by the designers of OggVorbis. There are 4 in total: 'ogg.dll' (the format and decoder), 'vorbis.dll' (the coding scheme), 'vorbisenc.dll' (tools for encoding), and 'vorbisfile.dll' (tools for streaming and seeking). We won't be using vorbisenc but I've included it in the files for your use. Using these libraries will take care of the hardest 99% of the work (pretty much all of the decoding). There really is no reason not to use them. First of all we would be re-inventing the wheel if we did, and I doubt that we could write something better than the actual designers of the codec. As another plus: these libraries will be updated as the format evolves without any additional work on our part. But the biggest reason to use these libraries is consistency. If all Ogg files are encoded and decoded using these libraries then all Ogg's should be able to play in all Ogg players. As long as we use this standard library set then we can be sure we will support any Ogg file in existence.

We also create the macro 'BUFFER_SIZE' which defines how big a chunk we want to read from the stream on each update. You will find (with a little experimentation) that larger buffers usually produce better sound quality since they don't update as often, and will generally avoid any abrupt pauses or sound distortions. Of course making your buffer too big will eat up more memory, and making it big enough to hold the whole file would make streaming redundant. I believe 4096 is the minimum buffer size one can have, and I don't recommend using one that small; I tried, and it caused many clicks.

So why should we even bother with streaming? Why not load the whole file into a buffer and then play it? Well, that is a good question. The quick answer is that there is too much audio data. Even though the actual Ogg file size is quite small (usually around 1-3 MB) you must remember that is compressed audio data. You cannot play the compressed form of the data. It must be decompressed and formatted into a form OpenAL recognizes before it can be used in a buffer. That is why we stream the file.
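To put some rough (illustrative) numbers on it: a 3-minute track decoded to 16-bit stereo at 44100 Hz comes out to 44100 samples/s * 2 bytes * 2 channels * 180 s, which is roughly 30 MB of raw PCM data, versus the 1-3 MB Ogg sitting on disk.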

class ogg_stream
{
    public:

        void open(string path); // obtain a handle to the file
        void release();         // release the file handle
        void display();         // display some info on the Ogg
        bool playback();        // play the Ogg stream
        bool playing();         // check if the source is playing
        bool update();          // update the stream if necessary

    protected:

        bool stream(ALuint buffer);   // reloads a buffer
        void empty();                 // empties the queue
        void check();                 // checks OpenAL error state
        string errorString(int code); // stringify an error code

This will be the base of our Ogg streaming api. The public methods are everything that one needs to actually get the Ogg to play. Protected methods are more internal procedures (like error checking). I won't go over each function just yet. I believe my comments should give you an idea of what they're for.

    private:

        FILE*           oggFile;       // file handle
        OggVorbis_File  oggStream;     // stream handle
        vorbis_info*    vorbisInfo;    // some formatting data
        vorbis_comment* vorbisComment; // user comments

        ALuint buffers[2]; // front and back buffers
        ALuint source;     // audio source
        ALenum format;     // internal format
};

First thing that I want to point out is that we have 2 buffers dedicated to the stream rather than the 1 we have always used for wav files. This is important. To understand this I want you to think about how double buffering works in OpenGL/DirectX. There is a front buffer that is "on screen" at any given second, while a back buffer is being drawn to. Then they are swapped. The back buffer becomes the front and vice versa. Pretty much the same principle is applied here. There is a buffer being played and one waiting to be played. When the buffer being played has finished the next one starts. While the next buffer is being played, the first one is refilled with a new chunk of data from the stream and is set to play once the one playing is finished. Confused yet? I'll explain this in more detail later on.

I know you're looking at 'FILE*' and wondering why we would use a vanilla C file handle when we are working in C++. Well, vorbisfile was designed around C rather than C++, so it's natural that it uses the C file system. It is possible, and indeed quite easy, to get vorbisfile to work with fstream, but even though it is easy it's not any simpler than doing it this way.

void ogg_stream::open(string path)
{
    int result;

    if(!(oggFile = fopen(path.c_str(), "rb")))
        throw string("Could not open Ogg file.");

    if((result = ov_open(oggFile, &oggStream, NULL, 0)) < 0)
    {
        fclose(oggFile);

        throw string("Could not open Ogg stream. ") + errorString(result);
    }

See what I mean? If we were to use fstream we would have to create several new functions and register them through 'ov_open_callbacks'. This may be more useful for you if you need support for a virtual file system. The function 'ov_open' binds the file handle with the Ogg stream. The stream now 'owns' this file handle so don't go messing around with it yourself.

    vorbisInfo = ov_info(&oggStream, -1);
    vorbisComment = ov_comment(&oggStream, -1);

    if(vorbisInfo->channels == 1)
        format = AL_FORMAT_MONO16;
    else
        format = AL_FORMAT_STEREO16;

This grabs some information on the file. We extract the OpenAL format enumerator based on how many channels are in the Ogg.

    alGenBuffers(2, buffers);
    check();
    alGenSources(1, &source);
    check();

    alSource3f(source, AL_POSITION,        0.0, 0.0, 0.0);
    alSource3f(source, AL_VELOCITY,        0.0, 0.0, 0.0);
    alSource3f(source, AL_DIRECTION,       0.0, 0.0, 0.0);
    alSourcef (source, AL_ROLLOFF_FACTOR,  0.0          );
    alSourcei (source, AL_SOURCE_RELATIVE, AL_TRUE      );
}

You've seen most of this before. We set a bunch of default values: position, velocity, direction... But what is rolloff factor? This has to do with attenuation. I will cover attenuation in a later article so I won't go too in-depth, but basically, the rolloff factor governs the strength of attenuation over distance. By setting it to 0 we have turned attenuation off, meaning that no matter how far away the Listener is from the source of the Ogg, they will still hear it. Making the source relative to the Listener serves the same purpose: the sound stays with the Listener no matter where they move.

void ogg_stream::release()
{
    alSourceStop(source);
    empty();
    alDeleteSources(1, &source);
    check();
    alDeleteBuffers(2, buffers);
    check();

    ov_clear(&oggStream);
}

We can clean up after ourselves using this. We stop the source, empty out any buffers that are still in the queue, and destroy our objects. 'ov_clear' releases its hold on the file stream and will close the handle for us as well.

void ogg_stream::display()
{
    cout
        << "version         " << vorbisInfo->version         << "\n"
        << "channels        " << vorbisInfo->channels        << "\n"
        << "rate (hz)       " << vorbisInfo->rate            << "\n"
        << "bitrate upper   " << vorbisInfo->bitrate_upper   << "\n"
        << "bitrate nominal " << vorbisInfo->bitrate_nominal << "\n"
        << "bitrate lower   " << vorbisInfo->bitrate_lower   << "\n"
        << "bitrate window  " << vorbisInfo->bitrate_window  << "\n"
        << "\n"
        << "vendor " << vorbisComment->vendor << "\n";

    for(int i = 0; i < vorbisComment->comments; i++)
        cout << "   " << vorbisComment->user_comments[i] << "\n";

    cout << endl;
}

We can use this to view additional information on the file.

bool ogg_stream::playback()
{
    if(playing())
        return true;

    if(!stream(buffers[0]))
        return false;

    if(!stream(buffers[1]))
        return false;

    alSourceQueueBuffers(source, 2, buffers);
    alSourcePlay(source);

    return true;
}

This will start playing the Ogg. If the Ogg is already playing then there is no reason to do it again. We must also initialize the buffers with their first data set. We then queue them and tell the source to play them. This is the first time we have used 'alSourceQueueBuffers'. What it does basically is give the source multiple buffers. These buffers will be played sequentially. I will explain more on this along with the source queue momentarily. One thing to make a note of though: if you are using a source for streaming never bind a buffer to it using 'alSourcei'. Always use 'alSourceQueueBuffers' consistently.

bool ogg_stream::playing()
{
    ALenum state;

    alGetSourcei(source, AL_SOURCE_STATE, &state);

    return (state == AL_PLAYING);
}

This simplifies the task of checking the state of the source.

bool ogg_stream::update()
{
    int processed;
    bool active = true;

    alGetSourcei(source, AL_BUFFERS_PROCESSED, &processed);

    while(processed--)
    {
        ALuint buffer;

        alSourceUnqueueBuffers(source, 1, &buffer);
        check();

        active = stream(buffer);

        alSourceQueueBuffers(source, 1, &buffer);
        check();
    }

    return active;
}
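As a rough sketch of how these methods might be driven together (my own example, with a hypothetical file 'music.ogg'; it assumes the OpenAL context is initialized as in the earlier lessons):

    int main()
    {
        ogg_stream ogg;

        try
        {
            ogg.open("music.ogg"); // hypothetical path

            ogg.display();

            if(!ogg.playback())
                throw string("Ogg refused to play.");

            // Keep feeding the queue until the stream runs dry.
            while(ogg.update())
            {
                // If the source starved (all queued buffers finished
                // before we refilled them), restart playback.
                if(!ogg.playing())
                {
                    if(!ogg.playback())
                        throw string("Ogg abruptly stopped.");
                }
            }

            ogg.release();
        }
        catch(string err)
        {
            cout << "Error: " << err << endl;
        }

        return 0;
    }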