Aros/Developer/AHIDrivers
For sound cards other than Paula (the Amiga(TM) chip), a system called AHI was developed. AHI uses ahi.device plus additional drivers to support different sound cards, which are configured in the AHI preferences (in the Prefs drawer). It is programmed in much the same way as the old Amiga audio.device. More information is included in the AHI developer files, which you can download from the AHI homepage or Aminet.
Units 0-3 can be shared by any number of programs you assign to them. The Music unit exclusively blocks the hardware it is set to, so no other program can play sound through that hardware at the same time. This is why <ahi-device>.audio was invented: it is a virtual piece of hardware that sends its sound data to whichever unit you assign it to. That way, even though the Music unit setting exclusively blocks <ahi-device>.audio, other programs can still send sound to units 0-3. Normally all programs use unit 0; only very few programs use the Music unit.
AHI works in two modes: as a device driver (high level) or as a library (low level). This confused me at first, but after programming the device mode it became clear: the device mode simply lets you send a stream of sound, while the library mode handles samples for use with a preloaded tracker.
The library (low-level) approach uses AHI functions such as AHI_SetSound(), AHI_SetVol() and so on. In practice this approach has one big problem: you get no mixing. Your program locks ahi.device, and while it is running no other AHI program will work. As the documentation puts it, the only advantages of low-level coding are "low overhead and more advanced control over the playing sound".
Here you have exclusive access to the audio hardware and can do almost anything you want, including monitoring.
The drawback is obvious: the audio hardware is unavailable to all other programs. Most drivers do not handle this situation gracefully; if another program tries to access the hardware it usually breaks everything, and you have to restart the program or reallocate the audio hardware.
Since AHI 6 there is a non-blocking AHI mode called "device mode", but it does not allow recording. It is playback only, and its timing is poor: good enough for playing something back, but far too bad for real-time response.
To use both approaches together you need the library mode as well: open ahi.device and extract the library base from the AHI device structure.
With the device mode you simply use ahi.device like a standard Amiga device and send raw data to AHI with CMD_WRITE. With this high-level AHI coding you get mixing, and there is no more "locking" of ahi.device. For MP3 playback or a module player, for example, you just decompress the data on the CPU as needed and pass the decompressed raw data to CMD_WRITE.
The device interface is much easier to program and is fine for system sounds or an MP3 player. It has a fixed latency of 20 ms, which is sufficient for most (non-music) cases. It does not block the hardware, so it is the preferred choice for simply playing audio.
It also supports the CMD_READ message, but:
- it blocks AHI exclusively for as long as you are reading, and
- it sometimes produces strange clicks when recording through the device interface.
All you do is use the CMD_WRITE command and set the sample information in the AHI IORequest structure; for additional samples you copy the IORequest and use the copy, and so on. Essentially the structure is sent as a message to the AHI daemon, which is standard practice for Exec devices, and that is why a copy is needed: otherwise AHI would try to link a message into a list when it is already on that list at the same address, and crash!
I would start with the device API, especially because it is very easy. Once you have loaded/generated the sample data, opened the AHI device and allocated the IORequest(s), you can play samples using the Exec library functions (DoIO, SendIO, BeginIO...). However, the number of AHI channels may be limited, so IIRC lower-priority audio is then queued and played later. You can create your own mixer routine which basically "streams" the data using double-buffered IO requests (there is an example of double buffering in the AHI SDK).
Do I really need to record through ahi.device? If you only need monitoring, you can use AHI's internal monitoring feature (it has the lowest possible latency and may use the hardware's own monitoring capabilities), or you can use the library interface to read, manipulate and copy your data to the output buffer. The latency is typically 20 ms, depends on the driver, and cannot be controlled by the application.
You can also play samples with datatypes.library. It is hard to say how precise its timing is, but it is at least very easy to use.
CreateMsgPort
CreateIORequest
OpenDevice (ahi.device)
loop {
    depack some music data
    fill AHIdatas
    SendIO((struct IORequest *) AHIdatas);
}
Then, when I need a sound, I just do the following for it (it will play on the second channel, which was the only solution to make both work through AHI at the same time):
CreateMsgPort
CreateIORequest
OpenDevice (ahi.device)
fill AHIdatas
DoIO/SendIO
Find a suitable audio ID, e.g. unit 0 or the default unit. Then call AHI_AllocAudioA(), passing it the ID or AHI_DEFAULT_ID together with an AHIA_Channels tag giving the minimum number of channels you need. Then check whether it allocated the channels. If it did, you know there are enough channels and you can call AHI_FreeAudio(). If not, it means there were not enough channels, provided you passed all the required tags.
The AHI device interface plays streams, not samples. AHI mixes together as many streams as the number of channels you set in the preferences. If you try to play more streams than there are channels available, the extra streams are muted.
If you need to synchronize two samples (perhaps for stereo), you can issue a CMD_STOP command, perform your CMD_WRITE commands, and then issue a CMD_START command to start playback. Be aware that this affects all AHI applications, not just your own.
That brings me to another question: is your sound mono or stereo? As you have read, the correct way to do stereo is to tell AHI to center the pan and provide a stereo sample. I do not know whether it returns an error if it cannot do that; it may accept the write but mute one channel, as you have found.
One more thing about issuing multiple CMD_WRITEs from different AHI requests: AHI treats each one separately and mixes the sounds together on the same track. As long as the hardware supports it, the high-level API can only provide panning, not a specific track, AFAIK.
http://utilitybase.com/forum/index.php?action=vthread&forum=201&topic=1565&page=-1
If you want to play several samples on one channel through the device API, you have to build a stream out of the samples yourself.
The AHI device API uses OpenDevice and then CMD_READ, CMD_WRITE, CMD_START and CMD_STOP.
if ((AHImp = CreateMsgPort())) {
    if ((AHIio = (struct AHIRequest *)CreateIORequest(AHImp, sizeof(struct AHIRequest)))) {
        AHIio->ahir_Version = 6;
        AHIDevice = OpenDevice(AHINAME, 0, (struct IORequest *)AHIio, 0);
    }
}
This creates a new message port, creates an IORequest structure, and finally opens the AHI device for writing.
// Play buffer
AHIio->ahir_Std.io_Message.mn_Node.ln_Pri = pri;
AHIio->ahir_Std.io_Command = CMD_WRITE;
AHIio->ahir_Std.io_Data = p1;
AHIio->ahir_Std.io_Length = length;
AHIio->ahir_Std.io_Offset = 0;
AHIio->ahir_Frequency = FREQUENCY;
AHIio->ahir_Type = TYPE;
AHIio->ahir_Volume = 0x10000; // Full volume
AHIio->ahir_Position = 0x8000; // Centered
AHIio->ahir_Link = link;
SendIO((struct IORequest *) AHIio);
// fill
AHIios[0]->ahir_Std.io_Message.mn_Node.ln_Pri = 127;
AHIios[0]->ahir_Std.io_Command = CMD_WRITE;
AHIios[0]->ahir_Std.io_Data = raw_data;
AHIios[0]->ahir_Std.io_Length = size_of_buffer;
AHIios[0]->ahir_Std.io_Offset = 0;
AHIios[0]->ahir_Frequency = 48000; // freq
AHIios[0]->ahir_Type = AHIST_S16S;// 16b
AHIios[0]->ahir_Volume = 0x10000; // vol.
AHIios[0]->ahir_Position = 0x8000;
AHIios[0]->ahir_Link = NULL;
// send
SendIO((struct IORequest *) AHIios[0]);
The AHIRequest structure is similar to the audio.device one. p1 points to the actual raw sound data, length is the size of the data buffer, Frequency is the replay frequency, e.g. 8000 Hz, Type is the sample data type, e.g. AHIST_M8S, followed by the volume and the position between the speakers. SendIO starts playing the sound, and you can wait with WaitIO until the buffer has finished playing before starting the next chunk of data.
Freeing the audio
- Call AHI_ControlAudio() with play set to FALSE, to make sure nothing is playing.
- Unload the sounds with AHI_UnloadSound(), to make sure they are unloaded.
- Then call AHI_FreeAudio().
Closing
When you are finished with the AHI device, you need to close it. For example:
- call CloseDevice()
- then DeleteIORequest()
- finally DeleteMsgPort()
if (!AHIDevice)
CloseDevice((struct IORequest *)AHIio);
DeleteIORequest((struct IORequest *)AHIio);
DeleteIORequest((struct IORequest *)AHIio2);
DeleteMsgPort(AHImp);
Updating the sound regularly
See the simpleplay example.
If you want to "update" your sound at regular intervals, some support is already provided.
In AllocAudio() you can supply a player function with the AHIA_PlayerFunc tag.
AHIA_PlayerFunc - If you are playing a score, you should use this "interrupt" source rather than VBLANK or a CIA timer, in order to get the best results with all audio drivers. If you cannot use this method, you must not use any "non-realtime" modes (see AHI_GetAudioAttrsA() in the autodocs, the AHIDB_Realtime tag).
AHIA_PlayerFreq - If non-zero, this enables timing and specifies how many times per second the PlayerFunc is called. It must be specified if AHIA_PlayerFunc is. It is suggested that you keep the frequency between 100 and 200 Hz. Since the frequency is a Fixed number, AHIA_PlayerFreq should be less than 13107200 (that is 200 Hz).
This makes it possible to write a kind of replayer which can, for example, decide which sounds need to be stopped, or do slides, volume up/down and so on.
Let the main loop wait until the player is "done".
You can do that with message passing, but signals work too. To stop the player you can use a boolean (set by a button press or whatever you prefer) which the player checks; the player then sends a signal to the main loop telling it to exit.
Have a look at the PlaySineEverywhere.c example in the AHI developer archive.
Miscellaneous
Many different things are called "latency". The one I care about most is the time difference between audio (e.g. from a microphone) arriving at an input and leaving through the monitor output. You can measure it by feeding something with a short attack (a cross-stick sound works well) into one channel and connecting that channel's output to the input of another channel. Record a few seconds on both channels, stop recording, zoom into the waveforms of both channels and measure the time difference between them. That is the input/output latency.
Latency when playing samples is trickier, because it depends on the program hosting the VST instrument. If you have a MIDI keyboard with its own sounds, you can pick similar sounds on the keyboard and in the VST library, connect the analog output of the sample playback channel to one input and the synthesizer output to another, play your sound, record it onto two tracks, and look at the time difference between the two tracks. This is not perfectly accurate, but it gives you a rough measurement.
If you do the following (in your code):
filebuffer = Open("e.raw", MODE_OLDFILE);
if (filebuffer == NULL) printf("\nfilebuffer NULL");
else length1 = Read(filebuffer, p1, BUFFERSIZE);

filebuffer = Open("a.raw", MODE_OLDFILE);
if (filebuffer == NULL) printf("\nfilebuffer NULL");
else length2 = Read(filebuffer, p2, BUFFERSIZE);

filebuffer = Open("d.raw", MODE_OLDFILE);
if (filebuffer == NULL) printf("\nfilebuffer NULL");
else length3 = Read(filebuffer, p3, BUFFERSIZE);

filebuffer = Open("g.raw", MODE_OLDFILE);
if (filebuffer == NULL) printf("\nfilebuffer NULL");
else length4 = Read(filebuffer, p4, BUFFERSIZE);

filebuffer = Open("b.raw", MODE_OLDFILE);
if (filebuffer == NULL) printf("\nfilebuffer NULL");
else length5 = Read(filebuffer, p5, BUFFERSIZE);

filebuffer = Open("ec.raw", MODE_OLDFILE);
if (filebuffer == NULL) printf("\nfilebuffer NULL");
else length6 = Read(filebuffer, p6, BUFFERSIZE);

then your variable "filebuffer" (which is a special pointer to the handle of the file) gets overwritten before the handle is closed. So something like this is expected instead:

filebuffer = Open("b.raw", MODE_OLDFILE);
if (filebuffer == NULL) {
    printf("\nfilebuffer NULL");
} else {
    length5 = Read(filebuffer, p5, BUFFERSIZE);
    if (Close(filebuffer)) {
        printf("\nfile b.raw closed successfully");
    } else {
        printf("\nfile b.raw did not close properly, but we cannot use the filehandle anymore because it is no longer valid");
    }
}
You have to unload/free every channel/sound that was used, and also any that were allocated but never used.
For example, the following code loops through to the last allocated channel.
for (chan_no = 0; chan_no < num_of_channels; chan_no++) {
    if (channel[chan_no])
        free(channel[chan_no]);
}
Perhaps if (channel[chan_no] != NULL).
To be safe, you can set each sound bank to NULL before exiting.
Examples
Another example.
It does require double buffering, though.
struct MsgPort *AHIPort = NULL;
struct AHIRequest *AHIReq = NULL;
BYTE AHIDevice = -1;
UBYTE unit = AHI_DEFAULT_UNIT;

static int write_ahi_output(char *output_data, int output_size);
static void close_ahi_output(void);

static int open_ahi_output(void)
{
    if ((AHIPort = CreateMsgPort())) {
        if ((AHIReq = (struct AHIRequest *)CreateIORequest(AHIPort, sizeof(struct AHIRequest)))) {
            AHIReq->ahir_Version = 4;
            if (!(AHIDevice = OpenDevice(AHINAME, unit, (struct IORequest *)AHIReq, NULL))) {
                send_output = write_ahi_output;
                close_output = close_ahi_output;
                return 0;
            }
            DeleteIORequest((struct IORequest *)AHIReq);
            AHIReq = NULL;
        }
        DeleteMsgPort(AHIPort);
        AHIPort = NULL;
    }
    return -1;
}

static int write_ahi_output(char *output_data, int output_size)
{
    if (!CheckIO((struct IORequest *)AHIReq)) {
        WaitIO((struct IORequest *)AHIReq);
        //AbortIO((struct IORequest *)AHIReq);
    }

    AHIReq->ahir_Std.io_Command = CMD_WRITE;
    AHIReq->ahir_Std.io_Flags = 0;
    AHIReq->ahir_Std.io_Data = output_data;
    AHIReq->ahir_Std.io_Length = output_size;
    AHIReq->ahir_Std.io_Offset = 0;
    AHIReq->ahir_Frequency = rate;
    AHIReq->ahir_Type = AHIST_S16S;
    AHIReq->ahir_Volume = 0x10000;
    AHIReq->ahir_Position = 0x8000;
    AHIReq->ahir_Link = NULL;
    SendIO((struct IORequest *)AHIReq);

    return 0;
}

static void close_ahi_output(void)
{
    if (!CheckIO((struct IORequest *)AHIReq)) {
        WaitIO((struct IORequest *)AHIReq);
        AbortIO((struct IORequest *)AHIReq);
    }
    if (AHIReq) {
        CloseDevice((struct IORequest *)AHIReq);
        AHIDevice = -1;
        DeleteIORequest((struct IORequest *)AHIReq);
        AHIReq = NULL;
    }
    if (AHIPort) {
        DeleteMsgPort(AHIPort);
        AHIPort = NULL;
    }
}
High-level AHI for sound playback - the idea is to create several I/O requests; when you want to play a sound, pick a free one, start a CMD_WRITE on it with BeginIO(), and mark the I/O request as in use (the ch->busy field in the code above). What the SoundIO() function does is check the replies from ahi.device to see whether any I/O requests have completed, and mark those as no longer in use. If there is no free I/O request, the PlaySnd function simply interrupts the longest-playing one with AbortIO()/WaitIO() and reuses that request.
char *snd_buffer[5];
int sound_file_size[5];
int number;
struct Process *sound_player;
int sound_player_done = 0;

void load_sound(char *name, int number)
{
    FILE *fp_filename;

    if ((fp_filename = fopen(name, "rb")) == NULL) {
        printf("can't open sound file\n");
        exit(0);
    }
    fseek(fp_filename, 0, SEEK_END);
    sound_file_size[number] = ftell(fp_filename);
    fseek(fp_filename, 0, SEEK_SET);
    snd_buffer[number] = (char *)malloc(sound_file_size[number]);
    fread(snd_buffer[number], sound_file_size[number], 1, fp_filename);
    //printf("%d\n", sound_file_size[number]);
    fclose(fp_filename);
    // free(snd_buffer[number]);
}

void play_sound_routine(void)
{
    struct MsgPort *AHImp_sound = NULL;
    struct AHIRequest *AHIios_sound[2] = {NULL, NULL};
    struct AHIRequest *AHIio_sound = NULL;
    BYTE AHIDevice_sound = -1;
    //ULONG sig_sound;

    //----- open/setup AHI
    if ((AHImp_sound = CreateMsgPort()) != NULL) {
        if ((AHIio_sound = (struct AHIRequest *)CreateIORequest(AHImp_sound, sizeof(struct AHIRequest))) != NULL) {
            AHIio_sound->ahir_Version = 4;
            AHIDevice_sound = OpenDevice(AHINAME, 0, (struct IORequest *)AHIio_sound, 0);
        }
    }
    if (AHIDevice_sound) {
        Printf("Unable to open %s/0 version 4\n", AHINAME);
        goto sound_panic;
    }

    AHIios_sound[0] = AHIio_sound;
    SetIoErr(0);

    AHIios_sound[0]->ahir_Std.io_Message.mn_Node.ln_Pri = 127;
    AHIios_sound[0]->ahir_Std.io_Command = CMD_WRITE;
    AHIios_sound[0]->ahir_Std.io_Data = snd_buffer[number];        // sndbuf
    AHIios_sound[0]->ahir_Std.io_Length = sound_file_size[number]; // fib_snd.fib_Size
    AHIios_sound[0]->ahir_Std.io_Offset = 0;
    AHIios_sound[0]->ahir_Frequency = 8000;    // 44100
    AHIios_sound[0]->ahir_Type = AHIST_M8S;    // AHIST_M16S
    AHIios_sound[0]->ahir_Volume = 0x10000;    // Full volume
    AHIios_sound[0]->ahir_Position = 0x8000;   // Centered
    AHIios_sound[0]->ahir_Link = NULL;
    DoIO((struct IORequest *)AHIios_sound[0]);

sound_panic:
    //printf("are we on sound_exit?\n");
    if (!AHIDevice_sound)
        CloseDevice((struct IORequest *)AHIio_sound);
    DeleteIORequest((struct IORequest *)AHIio_sound);
    DeleteMsgPort(AHImp_sound);
    sound_player_done = 1;
}

void stop_sound(void)
{
    Signal(&sound_player->pr_Task, SIGBREAKF_CTRL_C);
    while (sound_player_done != 1) {};
    sound_player_done = 0;
}

void play_sound(int num)
{
    number = num;
#ifdef __MORPHOS__
    sound_player = CreateNewProcTags(
        NP_Entry, &play_sound_routine,
        NP_Priority, 1,
        NP_Name, "Ahi raw-sound-player Process",
        // NP_Input, Input(),
        // NP_CloseInput, FALSE,
        // NP_Output, Output(),
        // NP_CloseOutput, FALSE,
        NP_CodeType, CODETYPE_PPC,
        TAG_DONE);
#else
    sound_player = CreateNewProcTags(
        NP_Entry, &play_sound_routine,
        NP_Priority, 1,
        NP_Name, "Ahi raw-sound-player Process",
        // NP_Input, Input(),
        // NP_CloseInput, FALSE,
        // NP_Output, Output(),
        // NP_CloseOutput, FALSE,
        TAG_DONE);
#endif
    Delay(10); // small delay to let the sound finish
}
Low-level approach for music playback
These steps will allow you to use low-level AHI functions:
- Create a message port and an AHIRequest with the appropriate functions from exec.library.
- Open the device with OpenDevice(), giving AHI_NO_UNIT as the unit.
- Get an interface to the library with GetInterface(), giving the io_Device field of the IORequest as the first parameter.

struct AHIIFace *IAHI;
struct Library *AHIBase;
struct AHIRequest *ahi_request;
struct MsgPort *mp;

if ((mp = IExec->CreateMsgPort())) {
    if ((ahi_request = (struct AHIRequest *)IExec->CreateIORequest(mp, sizeof(struct AHIRequest)))) {
        ahi_request->ahir_Version = 4;
        if (IExec->OpenDevice("ahi.device", AHI_NO_UNIT, (struct IORequest *)ahi_request, 0) == 0) {
            AHIBase = (struct Library *)ahi_request->ahir_Std.io_Device;
            if ((IAHI = (struct AHIIFace *)IExec->GetInterface(AHIBase, "main", 1, NULL))) {
                // Interface obtained, we can now use AHI functions
                // ...
                // Once we are done we have to drop the interface and free resources
                IExec->DropInterface((struct Interface *)IAHI);
            }
            IExec->CloseDevice((struct IORequest *)ahi_request);
        }
        IExec->DeleteIORequest((struct IORequest *)ahi_request);
    }
    IExec->DeleteMsgPort(mp);
}
Once you have the AHI interface, its functions can be used. To start playing sounds you need to allocate the audio (optionally you can ask the user for an audio mode and frequency). Then you need to load the samples to use with AHI. You do this with AHI_AllocAudio(), AHI_ControlAudio() and AHI_LoadSound().

struct AHIAudioCtrl *ahi_ctrl;

if ((ahi_ctrl = IAHI->AHI_AllocAudio(
        AHIA_AudioID, AHI_DEFAULT_ID,
        AHIA_MixFreq, AHI_DEFAULT_FREQ,
        AHIA_Channels, NUMBER_OF_CHANNELS, // the desired number of channels
        AHIA_Sounds, NUMBER_OF_SOUNDS,     // maximum number of sounds used
        TAG_DONE))) {
    IAHI->AHI_ControlAudio(ahi_ctrl, AHIC_Play, TRUE, TAG_DONE);

    int i;
    for (i = 0; i < NUMBER_OF_SOUNDS; i++) {
        // These variables need to be initialized
        uint32 type;
        APTR samplearray;
        uint32 length;
        struct AHISampleInfo sample;

        sample.ahisi_Type = type;           // where type is the type of sample, e.g. AHIST_M8S for 8-bit mono sound
        sample.ahisi_Address = samplearray; // where samplearray must point to the sample data
        sample.ahisi_Length = length / IAHI->AHI_SampleFrameSize(type);

        if (IAHI->AHI_LoadSound(i + 1, AHIST_SAMPLE, &sample, ahi_ctrl) != 0) {
            // error while loading sound, clean up
        }
    }

    // everything OK, play the sounds
    // ...
    // then unload the sounds and free the audio
    for (i = 0; i < NUMBER_OF_SOUNDS; i++)
        IAHI->AHI_UnloadSound(i + 1, ahi_ctrl);
    IAHI->AHI_ControlAudio(ahi_ctrl, AHIC_Play, FALSE, TAG_DONE);
    IAHI->AHI_FreeAudio(ahi_ctrl);
}
Set the volume with AHI_SetVol(), the frequency with AHI_SetFreq(), and play a sound with AHI_SetSound().
#include <devices/ahi.h>
#include <dos/dostags.h>
#include <proto/dos.h>
#include <proto/exec.h>
#include <proto/ptplay.h>

struct UserArgs
{
    STRPTR file;
    LONG *freq;
};

CONST TEXT Version[] = "$VER: ShellPlayer 1.0 (4.4.06)";

STATIC struct Library *PtPlayBase;
STATIC struct Task *maintask;
STATIC APTR modptr;
STATIC LONG frequency;
STATIC VOLATILE int player_done = 0;

STATIC VOID AbortAHI(struct MsgPort *port, struct IORequest *r1, struct IORequest *r2)
{
    if (!CheckIO(r1)) {
        AbortIO(r1);
        WaitIO(r1);
    }
    if (!CheckIO(r2)) {
        AbortIO(r2);
        WaitIO(r2);
    }
    GetMsg(port);
    GetMsg(port);
}

STATIC VOID StartAHI(struct AHIRequest *r1, struct AHIRequest *r2, WORD *buf1, WORD *buf2)
{
    PtRender(modptr, (BYTE *)(buf1), (BYTE *)(buf1+1), 4, frequency, 1, 16, 2);
    PtRender(modptr, (BYTE *)(buf2), (BYTE *)(buf2+1), 4, frequency, 1, 16, 2);

    r1->ahir_Std.io_Command = CMD_WRITE;
    r1->ahir_Std.io_Offset = 0;
    r1->ahir_Std.io_Data = buf1;
    r1->ahir_Std.io_Length = frequency*2*2;

    r2->ahir_Std.io_Command = CMD_WRITE;
    r2->ahir_Std.io_Offset = 0;
    r2->ahir_Std.io_Data = buf2;
    r2->ahir_Std.io_Length = frequency*2*2;

    r1->ahir_Link = NULL;
    r2->ahir_Link = r1;

    SendIO((struct IORequest *)r1);
    SendIO((struct IORequest *)r2);
}

STATIC VOID PlayerRoutine(void)
{
    struct AHIRequest req1, req2;
    struct MsgPort *port;
    WORD *buf1, *buf2;

    buf1 = AllocVec(frequency*2*2, MEMF_ANY);
    buf2 = AllocVec(frequency*2*2, MEMF_ANY);

    if (buf1 && buf2) {
        port = CreateMsgPort();
        if (port) {
            req1.ahir_Std.io_Message.mn_Node.ln_Pri = 0;
            req1.ahir_Std.io_Message.mn_ReplyPort = port;
            req1.ahir_Std.io_Message.mn_Length = sizeof(req1);
            req1.ahir_Version = 2;

            if (OpenDevice("ahi.device", 0, (struct IORequest *)&req1, 0) == 0) {
                req1.ahir_Type = AHIST_S16S;
                req1.ahir_Frequency = frequency;
                req1.ahir_Volume = 0x10000;
                req1.ahir_Position = 0x8000;
                CopyMem(&req1, &req2, sizeof(struct AHIRequest));

                StartAHI(&req1, &req2, buf1, buf2);

                for (;;) {
                    struct AHIRequest *io;
                    ULONG sigs;

                    sigs = Wait(SIGBREAKF_CTRL_C | 1 << port->mp_SigBit);
                    if (sigs & SIGBREAKF_CTRL_C)
                        break;

                    if ((io = (struct AHIRequest *)GetMsg(port))) {
                        if (GetMsg(port)) {
                            // Both IO requests finished, restart
                            StartAHI(&req1, &req2, buf1, buf2);
                        } else {
                            APTR link;
                            WORD *buf;

                            if (io == &req1) {
                                link = &req2;
                                buf = buf1;
                            } else {
                                link = &req1;
                                buf = buf2;
                            }

                            PtRender(modptr, (BYTE *)buf, (BYTE *)(buf+1), 4, frequency, 1, 16, 2);

                            io->ahir_Std.io_Command = CMD_WRITE;
                            io->ahir_Std.io_Offset = 0;
                            io->ahir_Std.io_Length = frequency*2*2;
                            io->ahir_Std.io_Data = buf;
                            io->ahir_Link = link;
                            SendIO((struct IORequest *)io);
                        }
                    }
                }

                AbortAHI(port, (struct IORequest *)&req1, (struct IORequest *)&req2);
                CloseDevice((struct IORequest *)&req1);
            }
            DeleteMsgPort(port);
        }
    }

    FreeVec(buf1);
    FreeVec(buf2);

    Forbid();
    player_done = 1;
    Signal(maintask, SIGBREAKF_CTRL_C);
}

int main(void)
{
    struct RDArgs *args;
    struct UserArgs params;
    int rc = RETURN_FAIL;

    maintask = FindTask(NULL);

    args = ReadArgs("FILE/A,FREQ/K/N", (IPTR *)&params, NULL);
    if (args) {
        PtPlayBase = OpenLibrary("ptplay.library", 0);
        if (PtPlayBase) {
            BPTR fh;

            if (params.freq) {
                frequency = *params.freq;
            }
            if (frequency < 4000 || frequency > 96000)
                frequency = 48000;

            fh = Open(params.file, MODE_OLDFILE);
            if (fh) {
                struct FileInfoBlock fib;
                APTR buf;

                ExamineFH(fh, &fib);
                buf = AllocVec(fib.fib_Size, MEMF_ANY);
                if (buf) {
                    Read(fh, buf, fib.fib_Size);
                }
                Close(fh);

                if (buf) {
                    ULONG type;

                    type = PtTest(params.file, buf, 1200);
                    modptr = PtInit(buf, fib.fib_Size, frequency, type);
                    if (modptr) {
                        struct Process *player;

                        player = CreateNewProcTags(
                            NP_Entry, &PlayerRoutine,
                            NP_Priority, 1,
                            NP_Name, "Player Process",
#ifdef __MORPHOS__
                            NP_CodeType, CODETYPE_PPC,
#endif
                            TAG_DONE);
                        if (player) {
                            rc = RETURN_OK;
                            Printf("Now playing \033[1m%s\033[22m at %ld Hz... Press CTRL-C to abort.\n", params.file, frequency);
                            do {
                                Wait(SIGBREAKF_CTRL_C);
                                Forbid();
                                if (!player_done) {
                                    Signal(&player->pr_Task, SIGBREAKF_CTRL_C);
                                }
                                Permit();
                            } while (!player_done);
                        }
                        PtCleanup(modptr);
                    } else {
                        PutStr("Unknown file!\n");
                    }
                } else {
                    PutStr("Not enough memory!\n");
                }
            } else {
                PutStr("Could not open file!\n");
            }
            CloseLibrary(PtPlayBase);
        }
        FreeArgs(args);
    }

    if (rc == RETURN_FAIL)
        PrintFault(IoErr(), NULL);

    return rc;
}
Other examples
Master volume utility
Anyone could write such a relatively simple utility. Just call AHI_SetEffect() with a master volume structure. You could create a window with a slider and call the function from there easily.
You write to the AHI device, and AHI writes to a sound card, to the native hardware, or even to a file. These options are configured by the user. AHI also does the software mixing so that several sounds can be played at the same time.
AHI provides four "units" for audio. This makes it possible for one program to play on the native hardware while another plays on a sound card, by attaching the appropriate AHI driver to a unit number. For software developers, AHI offers two ways of playing audio. One is the AUDIO: DOS device. AHI can create a volume called AUDIO: which works like an AmigaDOS volume; you can read and write data directly to it and it will be played through the speakers. This is the easiest way to write PCM, but not the best.
First, if the user has removed the AUDIO: entry from the mountlist, your program will not work, and you will get plenty of silly support questions like "I have AHI installed, why doesn't it work?". The better option is to send IORequests to AHI. That lets you control the volume and balance settings while the program is running (with AUDIO: you set those when opening the file and cannot change them without closing and reopening AUDIO:), and you can use a neat trick called double buffering for better efficiency. Double buffering lets you fill one audio buffer while another is playing. This asynchronous operation prevents "stuttering" audio on slower systems.
We initialize AHI, then prepare and send AHI requests to ahi.device.
It is very important to calculate the number of bytes you want AHI to read from the buffer; getting it wrong can cause a nasty crash! To do this, multiply the PCM count by the number of channels by the number of AHI buffers.
A quick note about volume and position: AHI uses a rather arcane data type called Fixed. A Fixed number consists of 32 bits: a sign bit, a 15-bit integer part and a 16-bit fractional part. When building the AHI request, I multiply the number by 0x00010000 to convert it into a Fixed value. If I used this code in a DOS background process, I could change the volume and balance on the fly so that the next queued sample would play louder or quieter. It is also possible to interrupt AHI so that the change takes effect immediately, but I will not go into that.
After sending the request, we set up the necessary signal bits to check for CTRL-C and any AHI reply messages. Then it is time to swap buffers.
/*
* Copyright (C) 2005 Mark Olsen
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*/
#include <exec/exec.h>
#include <devices/ahi.h>
#include <proto/exec.h>
#define USE_INLINE_STDARG
#include <proto/ahi.h>
#include <utility/hooks.h>
#include "../game/q_shared.h"
#include "../client/snd_local.h"
struct AHIdata *ad;
struct AHIChannelInfo
{
struct AHIEffChannelInfo aeci;
ULONG offset;
};
struct AHIdata
{
struct MsgPort *msgport;
struct AHIRequest *ahireq;
int ahiopen;
struct AHIAudioCtrl *audioctrl;
void *samplebuffer;
struct Hook EffectHook;
struct AHIChannelInfo aci;
unsigned int readpos;
};
#if !defined(__AROS__)
ULONG EffectFunc()
{
struct Hook *hook = (struct Hook *)REG_A0;
struct AHIEffChannelInfo *aeci = (struct AHIEffChannelInfo *)REG_A1;
struct AHIdata *ad;
ad = hook->h_Data;
ad->readpos = aeci->ahieci_Offset[0];
return 0;
}
static struct EmulLibEntry EffectFunc_Gate =
{
TRAP_LIB, 0, (void (*)(void))EffectFunc
};
#else
AROS_UFH3(ULONG, EffectFunc,
AROS_UFHA(struct Hook *, hook, A0),
AROS_UFHA(struct AHIAudioCtrl *, aac, A2),
AROS_UFHA(struct AHIEffChannelInfo *, aeci, A1)
)
{
AROS_USERFUNC_INIT
struct AHIdata *ad;
ad = hook->h_Data;
ad->readpos = aeci->ahieci_Offset[0];
return 0;
AROS_USERFUNC_EXIT
}
#endif
qboolean SNDDMA_Init(void)
{
ULONG channels;
ULONG speed;
ULONG bits;
ULONG r;
struct Library *AHIBase;
struct AHISampleInfo sample;
cvar_t *sndbits;
cvar_t *sndspeed;
cvar_t *sndchannels;
char modename[64];
if (ad)
return 1;
sndbits = Cvar_Get("sndbits", "16", CVAR_ARCHIVE);
sndspeed = Cvar_Get("sndspeed", "0", CVAR_ARCHIVE);
sndchannels = Cvar_Get("sndchannels", "2", CVAR_ARCHIVE);
speed = sndspeed->integer;
if (speed == 0)
speed = 22050;
ad = AllocVec(sizeof(*ad), MEMF_ANY);
if (ad)
{
ad->msgport = CreateMsgPort();
if (ad->msgport)
{
ad->ahireq = (struct AHIRequest *)CreateIORequest(ad->msgport, sizeof(struct AHIRequest));
if (ad->ahireq)
{
ad->ahiopen = !OpenDevice("ahi.device", AHI_NO_UNIT, (struct IORequest *)ad->ahireq, 0);
if (ad->ahiopen)
{
AHIBase = (struct Library *)ad->ahireq->ahir_Std.io_Device;
ad->audioctrl = AHI_AllocAudio(AHIA_AudioID, AHI_DEFAULT_ID,
AHIA_MixFreq, speed,
AHIA_Channels, 1,
AHIA_Sounds, 1,
TAG_END);
if (ad->audioctrl)
{
AHI_GetAudioAttrs(AHI_INVALID_ID, ad->audioctrl,
AHIDB_BufferLen, sizeof(modename),
AHIDB_Name, (ULONG)modename,
AHIDB_MaxChannels, (ULONG)&channels,
AHIDB_Bits, (ULONG)&bits,
TAG_END);
AHI_ControlAudio(ad->audioctrl,
AHIC_MixFreq_Query, (ULONG)&speed,
TAG_END);
if (bits == 8 || bits == 16)
{
if (channels > 2)
channels = 2;
dma.speed = speed;
dma.samplebits = bits;
dma.channels = channels;
#if !defined(__AROS__)
dma.samples = 2048*(speed/11025);
#else
dma.samples = 16384*(speed/11025);
#endif
dma.submission_chunk = 1;
#if !defined(__AROS__)
ad->samplebuffer = AllocVec(2048*(speed/11025)*(bits/8)*channels, MEMF_ANY);
#else
ad->samplebuffer = AllocVec(16384*(speed/11025)*(bits/8)*channels, MEMF_ANY);
#endif
if (ad->samplebuffer)
{
dma.buffer = ad->samplebuffer;
if (channels == 1)
{
if (bits == 8)
sample.ahisi_Type = AHIST_M8S;
else
sample.ahisi_Type = AHIST_M16S;
}
else
{
if (bits == 8)
sample.ahisi_Type = AHIST_S8S;
else
sample.ahisi_Type = AHIST_S16S;
}
sample.ahisi_Address = ad->samplebuffer;
#if !defined(__AROS__)
sample.ahisi_Length = (2048*(speed/11025)*(bits/8))/AHI_SampleFrameSize(sample.ahisi_Type);
#else
sample.ahisi_Length = (16384*(speed/11025)*(bits/8))/AHI_SampleFrameSize(sample.ahisi_Type);
#endif
r = AHI_LoadSound(0, AHIST_DYNAMICSAMPLE, &sample, ad->audioctrl);
if (r == 0)
{
r = AHI_ControlAudio(ad->audioctrl,
AHIC_Play, TRUE,
TAG_END);
if (r == 0)
{
AHI_Play(ad->audioctrl,
AHIP_BeginChannel, 0,
AHIP_Freq, speed,
AHIP_Vol, 0x10000,
AHIP_Pan, 0x8000,
AHIP_Sound, 0,
AHIP_EndChannel, NULL,
TAG_END);
ad->aci.aeci.ahie_Effect = AHIET_CHANNELINFO;
ad->aci.aeci.ahieci_Func = &ad->EffectHook;
ad->aci.aeci.ahieci_Channels = 1;
#if !defined(__AROS__)
ad->EffectHook.h_Entry = (void *)&EffectFunc_Gate;
#else
ad->EffectHook.h_Entry = (IPTR (*)())&EffectFunc;
#endif
ad->EffectHook.h_Data = ad;
AHI_SetEffect(&ad->aci, ad->audioctrl);
Com_Printf("Using AHI mode \"%s\" for audio output\n", modename);
Com_Printf("Channels: %d bits: %d frequency: %d\n", channels, bits, speed);
return 1;
}
}
}
FreeVec(ad->samplebuffer);
}
AHI_FreeAudio(ad->audioctrl);
}
else
Com_Printf("Failed to allocate AHI audio\n");
CloseDevice((struct IORequest *)ad->ahireq);
}
DeleteIORequest((struct IORequest *)ad->ahireq);
}
DeleteMsgPort(ad->msgport);
}
FreeVec(ad);
}
return 0;
}
int SNDDMA_GetDMAPos(void)
{
return ad->readpos*dma.channels;
}
void SNDDMA_Shutdown(void)
{
struct Library *AHIBase;
if (ad == 0)
return;
AHIBase = (struct Library *)ad->ahireq->ahir_Std.io_Device;
ad->aci.aeci.ahie_Effect = AHIET_CHANNELINFO|AHIET_CANCEL;
AHI_SetEffect(&ad->aci.aeci, ad->audioctrl);
AHI_ControlAudio(ad->audioctrl,
AHIC_Play, FALSE,
TAG_END);
AHI_FreeAudio(ad->audioctrl);
FreeVec(ad->samplebuffer);
CloseDevice((struct IORequest *)ad->ahireq);
DeleteIORequest((struct IORequest *)ad->ahireq);
DeleteMsgPort(ad->msgport);
FreeVec(ad);
ad = 0;
}
void SNDDMA_Submit(void)
{
}
void SNDDMA_BeginPainting (void)
{
}
/*
Copyright (C) 2006-2007 Mark Olsen
This program is free software; you can redistribute it and/or
modify it under the terms of the GNU General Public License
as published by the Free Software Foundation; either Version 2
of the License, or (at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
*/
#include <exec/exec.h>
#include <devices/ahi.h>
#include <proto/exec.h>
#define USE_INLINE_STDARG
#include <proto/ahi.h>
#include "quakedef.h"
#include "sound.h"
struct AHIChannelInfo
{
struct AHIEffChannelInfo aeci;
ULONG offset;
};
struct ahi_private
{
struct MsgPort *msgport;
struct AHIRequest *ahireq;
struct AHIAudioCtrl *audioctrl;
void *samplebuffer;
struct Hook EffectHook;
struct AHIChannelInfo aci;
unsigned int readpos;
};
ULONG EffectFunc()
{
struct Hook *hook = (struct Hook *)REG_A0;
struct AHIEffChannelInfo *aeci = (struct AHIEffChannelInfo *)REG_A1;
struct ahi_private *p;
p = hook->h_Data;
p->readpos = aeci->ahieci_Offset[0];
return 0;
}
static struct EmulLibEntry EffectFunc_Gate =
{
TRAP_LIB, 0, (void (*)(void))EffectFunc
};
void ahi_shutdown(struct SoundCard *sc)
{
struct ahi_private *p = sc->driverprivate;
struct Library *AHIBase;
AHIBase = (struct Library *)p->ahireq->ahir_Std.io_Device;
p->aci.aeci.ahie_Effect = AHIET_CHANNELINFO|AHIET_CANCEL;
AHI_SetEffect(&p->aci.aeci, p->audioctrl);
AHI_ControlAudio(p->audioctrl,
AHIC_Play, FALSE,
TAG_END);
AHI_FreeAudio(p->audioctrl);
CloseDevice((struct IORequest *)p->ahireq);
DeleteIORequest((struct IORequest *)p->ahireq);
DeleteMsgPort(p->msgport);
FreeVec(p->samplebuffer);
FreeVec(p);
}
int ahi_getdmapos(struct SoundCard *sc)
{
struct ahi_private *p = sc->driverprivate;
sc->samplepos = p->readpos*sc->channels;
return sc->samplepos;
}
void ahi_submit(struct SoundCard *sc, unsigned int count)
{
}
qboolean ahi_init(struct SoundCard *sc, int rate, int channels, int bits)
{
struct ahi_private *p;
ULONG r;
char name[64];
struct Library *AHIBase;
struct AHISampleInfo sample;
p = AllocVec(sizeof(*p), MEMF_ANY);
if (p)
{
p->msgport = CreateMsgPort();
if (p->msgport)
{
p->ahireq = (struct AHIRequest *)CreateIORequest(p->msgport, sizeof(struct AHIRequest));
if (p->ahireq)
{
r = !OpenDevice("ahi.device", AHI_NO_UNIT, (struct IORequest *)p->ahireq, 0);
if (r)
{
AHIBase = (struct Library *)p->ahireq->ahir_Std.io_Device;
p->audioctrl = AHI_AllocAudio(AHIA_AudioID, AHI_DEFAULT_ID,
AHIA_MixFreq, rate,
AHIA_Channels, 1,
AHIA_Sounds, 1,
TAG_END);
if (p->audioctrl)
{
AHI_GetAudioAttrs(AHI_INVALID_ID, p->audioctrl,
AHIDB_BufferLen, sizeof(name),
AHIDB_Name, (ULONG)name,
AHIDB_MaxChannels, (ULONG)&channels,
AHIDB_Bits, (ULONG)&bits,
TAG_END);
AHI_ControlAudio(p->audioctrl,
AHIC_MixFreq_Query, (ULONG)&rate,
TAG_END);
if (bits == 8 || bits == 16)
{
if (channels > 2)
channels = 2;
sc->speed = rate;
sc->samplebits = bits;
sc->channels = channels;
sc->samples = 16384*(rate/11025);
p->samplebuffer = AllocVec(16384*(rate/11025)*(bits/8)*channels, MEMF_CLEAR);
if (p->samplebuffer)
{
sc->buffer = p->samplebuffer;
if (channels == 1)
{
if (bits == 8)
sample.ahisi_Type = AHIST_M8S;
else
sample.ahisi_Type = AHIST_M16S;
}
else
{
if (bits == 8)
sample.ahisi_Type = AHIST_S8S;
else
sample.ahisi_Type = AHIST_S16S;
}
sample.ahisi_Address = p->samplebuffer;
sample.ahisi_Length = (16384*(rate/11025)*(bits/8))/AHI_SampleFrameSize(sample.ahisi_Type);
r = AHI_LoadSound(0, AHIST_DYNAMICSAMPLE, &sample, p->audioctrl);
if (r == 0)
{
r = AHI_ControlAudio(p->audioctrl,
AHIC_Play, TRUE,
TAG_END);
if (r == 0)
{
AHI_Play(p->audioctrl,
AHIP_BeginChannel, 0,
AHIP_Freq, rate,
AHIP_Vol, 0x10000,
AHIP_Pan, 0x8000,
AHIP_Sound, 0,
AHIP_EndChannel, NULL,
TAG_END);
p->aci.aeci.ahie_Effect = AHIET_CHANNELINFO;
p->aci.aeci.ahieci_Func = &p->EffectHook;
p->aci.aeci.ahieci_Channels = 1;
p->EffectHook.h_Entry = (void *)&EffectFunc_Gate;
p->EffectHook.h_Data = p;
AHI_SetEffect(&p->aci, p->audioctrl);
Com_Printf("Using AHI mode \"%s\" for audio output\n", name);
Com_Printf("Channels: %d bits: %d frequency: %d\n", channels, bits, rate);
sc->driverprivate = p;
sc->GetDMAPos = ahi_getdmapos;
sc->Submit = ahi_submit;
sc->Shutdown = ahi_shutdown;
return 1;
}
}
}
FreeVec(p->samplebuffer);
}
AHI_FreeAudio(p->audioctrl);
}
else
Com_Printf("Failed to allocate AHI audio\n");
CloseDevice((struct IORequest *)p->ahireq);
}
DeleteIORequest((struct IORequest *)p->ahireq);
}
DeleteMsgPort(p->msgport);
}
FreeVec(p);
}
return 0;
}
SoundInitFunc AHI_Init = ahi_init;
Hooks
An old idea, best avoided if possible. The hook function should be used to play/control the samples. It is called at the frequency given at initialization (100 in this example).
So in your "normal" code you flip a switch somewhere which tells the hook function to start playing a sample (or to do whatever you want with it).
Then, in the hook function, you start playing the sample and apply effects using the AHI control functions (among others).
One example of this would be module data (nowadays the .mod file format) being processed for each channel, with effects applied and so on.
In your case it would be more like a mod player: you want to start playing a note and stop it when needed, for example when a counter reaches a certain value.
The (likely) reason your numbers are not printed is that this routine is called _many_ times per second.
The point is that you have to find a mechanism (whatever suits your purpose best) that "feeds" the player (the hook function) from mouse clicks (or key presses), and a mechanism that lets the player do other things to the playing sample (stop it, apply effects, and so on).
You can use the hook's data attribute to "pass"/push a structure to your playback routine, so that you can, for example, tell the player that a certain sample has started playing. The player can then decide (say, when a counter reaches a certain value) to actually stop playing the sample and set/change a status field in that structure, so the main program knows the sample can be "played"/triggered again.
#include "backends/platform/amigaos3/amigaos3.h"
#include "backends/mixer/amigaos3/amigaos3-mixer.h"
#include "common/debug.h"
#include "common/system.h"
#include "common/config-manager.h"
#include "common/textconsole.h"
// Amiga includes
#include <clib/exec_protos.h>
#include "ahi-player-hook.h"
#define DEFAULT_MIX_FREQUENCY 11025
AmigaOS3MixerManager* g_mixerManager;
static void audioPlayerCallback() {
g_mixerManager->callbackHandler();
}
AmigaOS3MixerManager::AmigaOS3MixerManager()
:
_mixer(0),
_audioSuspended(false) {
g_mixerManager = this;
}
AmigaOS3MixerManager::~AmigaOS3MixerManager() {
if (_mixer) {
_mixer->setReady(false);
if (audioCtrl) {
debug(1, "deleting AHI_ControlAudio");
// Stop sounds.
AHI_ControlAudio(audioCtrl, AHIC_Play, FALSE, TAG_DONE);
if (_mixer) {
_mixer->setReady(false);
}
AHI_UnloadSound(0, audioCtrl);
AHI_FreeAudio(audioCtrl);
audioCtrl = NULL;
}
if (audioRequest) {
debug(1, "deleting AHIDevice");
CloseDevice((struct IORequest*)audioRequest);
DeleteIORequest((struct IORequest*)audioRequest);
audioRequest = NULL;
DeleteMsgPort(audioPort);
audioPort = NULL;
AHIBase = NULL;
}
if (sample.ahisi_Address) {
debug(1, "deleting soundBuffer");
FreeVec(sample.ahisi_Address);
sample.ahisi_Address = NULL;
}
delete _mixer;
}
}
void AmigaOS3MixerManager::init() {
audioPort = (struct MsgPort*)CreateMsgPort();
if (!audioPort) {
error("Could not create a Message Port for AHI");
}
audioRequest = (struct AHIRequest*)CreateIORequest(audioPort, sizeof(struct AHIRequest));
if (!audioRequest) {
error("Could not create an IO Request for AHI");
}
// Open at least version 4.
audioRequest->ahir_Version = 4;
BYTE deviceError = OpenDevice(AHINAME, AHI_NO_UNIT, (struct IORequest*)audioRequest, NULL);
if (deviceError) {
error("Unable to open AHI Device: %s version 4", AHINAME);
}
// Needed by Audio Control?
AHIBase = (struct Library *)audioRequest->ahir_Std.io_Device;
uint32 desiredMixingfrequency = 0;
// Determine the desired output sampling frequency.
if (ConfMan.hasKey("output_rate")) {
desiredMixingfrequency = ConfMan.getInt("output_rate");
}
if (desiredMixingfrequency == 0) {
desiredMixingfrequency = DEFAULT_MIX_FREQUENCY;
}
ULONG audioId = AHI_DEFAULT_ID;
audioCtrl = AHI_AllocAudio(
AHIA_AudioID, audioId,
AHIA_MixFreq, desiredMixingfrequency,
AHIA_Channels, numAudioChannels,
AHIA_Sounds, 1,
AHIA_PlayerFunc, createAudioPlayerCallback(audioPlayerCallback),
AHIA_PlayerFreq, audioCallbackFrequency<<16,
AHIA_MinPlayerFreq, audioCallbackFrequency<<16,
AHIA_MaxPlayerFreq, audioCallbackFrequency<<16,
TAG_DONE);
if (!audioCtrl) {
error("Could not initialize AHI");
}
// Get obtained mixing frequency.
ULONG obtainedMixingfrequency = 0;
AHI_ControlAudio(audioCtrl, AHIC_MixFreq_Query, (Tag)&obtainedMixingfrequency, TAG_DONE);
debug(5, "Mixing frequency desired = %d Hz", desiredMixingfrequency);
debug(5, "Mixing frequency obtained = %d Hz", obtainedMixingfrequency);
// Calculate the sample factor.
ULONG sampleCount = obtainedMixingfrequency / audioCallbackFrequency;
debug(5, "Calculated sample rate @ %u times per second = %u", audioCallbackFrequency, sampleCount);
// 32 bits (4 bytes) are required per sample for storage (16bit stereo).
sampleBufferSize = (sampleCount * AHI_SampleFrameSize(AHIST_S16S));
sample.ahisi_Type = AHIST_S16S;
sample.ahisi_Address = AllocVec(sampleBufferSize, MEMF_PUBLIC|MEMF_CLEAR);
sample.ahisi_Length = sampleCount;
AHI_SetFreq(0, obtainedMixingfrequency, audioCtrl, AHISF_IMM);
AHI_SetVol(0, 0x10000L, 0x8000L, audioCtrl, AHISF_IMM);
AHI_LoadSound(0, AHIST_DYNAMICSAMPLE, &sample, audioCtrl);
AHI_SetSound(0, 0, 0, 0, audioCtrl, AHISF_IMM);
// Create the mixer instance and start the sound processing.
assert(!_mixer);
_mixer = new Audio::MixerImpl(g_system, obtainedMixingfrequency);
assert(_mixer);
_mixer->setReady(true);
// Start feeding samples to sound hardware (and start the AHI callback!)
AHI_ControlAudio(audioCtrl, AHIC_Play, TRUE, TAG_DONE);
}
void AmigaOS3MixerManager::callbackHandler() {
assert(_mixer);
_mixer->mixCallback((byte*)sample.ahisi_Address, sampleBufferSize);
}
void AmigaOS3MixerManager::suspendAudio() {
AHI_ControlAudio(audioCtrl, AHIC_Play, FALSE, TAG_DONE);
_audioSuspended = true;
}
int AmigaOS3MixerManager::resumeAudio() {
if (!_audioSuspended) {
return -2;
}
AHI_ControlAudio(audioCtrl, AHIC_Play, TRUE, TAG_DONE);
_audioSuspended = false;
return 0;
}
AmiArcadia also uses AHI, and there is C source code available from AmiNet as well as ScummVM AGA.... all the source code is there. To create the AHI callback hook you will also need to include the SDI headers.
References
Still needs editing, may need reworking...
You may need to supply AHIA_MinPlayerFreq and AHIA_MaxPlayerFreq?
AHIA_PlayerFreq (Fixed) - If non-zero, enables timing and specifies how many times per second PlayerFunc will be called. This must be specified if AHIA_PlayerFunc is. Do not use any extreme frequencies: the result of MixFreq/PlayerFreq must fit a UWORD, i.e. it must be less than or equal to 65535, and it is suggested that you keep it above 80. In normal use this should not be a problem. Note that the data type is Fixed, not an integer - 50 Hz is 50<<16.
The default is reasonable. Don't depend on it.
AHIA_MinPlayerFreq (Fixed) - The minimum frequency (as for AHIA_PlayerFreq) you will use. This must be supplied if you use the device's interrupt feature!
AHIA_MaxPlayerFreq (Fixed) - The maximum frequency (as for AHIA_PlayerFreq) you will use. This must be supplied if you use the device's interrupt feature!
I don't see anything in the docs restricting the high end of the frequency; just remember that AHI must be able to finish the callback function in time for the next call. How big is your callback function?
AHI_GetAudioAttrs() should be terminated with TAG_DONE.
What is AHIR_DoMixFreq? Remove the AHIR_DoMixFreq tag from the AHI_AllocAudio() call - I don't think it should be there.
Decode the audio into sample buffers and feed the buffers to AHI with the normal double-buffering method from a subprocess. Can AHI buffer sounds in the music interrupt for later playback? How would I do what you suggest? I have never used the library API, sorry - I have always used CMD_WRITE to play sounds. It does not work: sooner or later the IO requests get out of sync because of task switching.
Advice on setting the number of channels
CMD_FLUSH CMD_READ CMD_RESET CMD_START CMD_STOP CMD_WRITE CloseDevice NSCMD_DEVICEQUERY OpenDevice ahi.device
if (AHI_GetAudioAttrs(AHI_DEFAULT_ID, NULL,
                      AHIDB_BufferLen, 100,
                      AHIDB_Inputs, &num_inputs,
                      TAG_DONE))
{
    printf("getaudioattrs worked\n");
    /* one 100-byte buffer per input name */
    char input_name[num_inputs][100];
    printf("num inputs is %i\navailable inputs:\n", num_inputs);
    for (int a = 0; a != num_inputs; a++) {
        AHI_GetAudioAttrs(AHI_DEFAULT_ID, NULL,
                          AHIDB_BufferLen, 100,
                          AHIDB_InputArg, a,
                          AHIDB_Input, &input_name[a],
                          TAG_DONE);
        printf("%i: %s\n", a, input_name[a]);
    }
    /* then select an input, e.g. the second one: */
    /* AHI_ControlAudio(Record_AudioCtrl, AHIC_Input, 1, TAG_DONE); */
}
/* changed the second argument from.. */
AHIDevice = OpenDevice(AHINAME, 0, (struct IORequest *)AHIio, NULL);
/* to this.. */
AHIDevice = OpenDevice(AHINAME, AHI_NO_UNIT, (struct IORequest *)AHIio, NULL);
audioctrl = AHI_AllocAudioA( tags );
struct AHIAudioCtrl *AHI_AllocAudioA( struct TagItem * );
audioctrl = AHI_AllocAudio( tag1, ... );
struct AHIAudioCtrl *AHI_AllocAudio( Tag, ... );
requester = AHI_AllocAudioRequestA( tags );
struct AHIAudioModeRequester *AHI_AllocAudioRequestA(struct TagItem * );
requester = AHI_AllocAudioRequest( tag1, ... );
struct AHIAudioModeRequester *AHI_AllocAudioRequest( Tag, ... );
success = AHI_AudioRequestA( requester, tags );
BOOL AHI_AudioRequestA( struct AHIAudioModeRequester *, struct TagItem * );
result = AHI_AudioRequest( requester, tag1, ... );
BOOL AHI_AudioRequest( struct AHIAudioModeRequester *, Tag, ... );
ID = AHI_BestAudioIDA( tags );
ULONG AHI_BestAudioIDA( struct TagItem * );
ID = AHI_BestAudioID( tag1, ... );
ULONG AHI_BestAudioID( Tag, ... );
error = AHI_ControlAudioA( audioctrl, tags );
ULONG AHI_ControlAudioA( struct AHIAudioCtrl *, struct TagItem * );
error = AHI_ControlAudio( AudioCtrl, tag1, ...);
ULONG AHI_ControlAudio( struct AHIAudioCtrl *, Tag, ... );
AHI_FreeAudio( audioctrl );
void AHI_FreeAudio( struct AHIAudioCtrl * );
AHI_FreeAudioRequest( requester );
void AHI_FreeAudioRequest( struct AHIAudioModeRequester * );
success = AHI_GetAudioAttrsA( ID, [audioctrl], tags );
BOOL AHI_GetAudioAttrsA( ULONG, struct AHIAudioCtrl *, struct TagItem * );
success = AHI_GetAudioAttrs( ID, [audioctrl], attr1, &result1, ...);
BOOL AHI_GetAudioAttrs( ULONG, struct AHIAudioCtrl *, Tag, ... );
error = AHI_LoadSound( sound, type, info, audioctrl );
ULONG AHI_LoadSound( UWORD, ULONG, IPTR, struct AHIAudioCtrl * );
next_ID = AHI_NextAudioID( last_ID );
ULONG AHI_NextAudioID( ULONG );
AHI_PlayA( audioctrl, tags );
void AHI_PlayA( struct AHIAudioCtrl *, struct TagItem * );
AHI_Play( AudioCtrl, tag1, ...);
void AHI_Play( struct AHIAudioCtrl *, Tag, ... );
size = AHI_SampleFrameSize( sampletype );
ULONG AHI_SampleFrameSize( ULONG );
error = AHI_SetEffect( effect, audioctrl );
ULONG AHI_SetEffect( IPTR, struct AHIAudioCtrl * );
AHI_SetFreq( channel, freq, audioctrl, flags );
void AHI_SetFreq( UWORD, ULONG, struct AHIAudioCtrl *, ULONG );
AHI_SetSound( channel, sound, offset, length, audioctrl, flags );
void AHI_SetSound( UWORD, UWORD, ULONG, LONG, struct AHIAudioCtrl *, ULONG );
AHI_SetVol( channel, volume, pan, audioctrl, flags );
void AHI_SetVol( UWORD, Fixed, sposition, struct AHIAudioCtrl *, ULONG );
AHI_UnloadSound( sound, audioctrl );
void AHI_UnloadSound( UWORD, struct AHIAudioCtrl * );