Integrating Audio Recording into a Ruby on Rails App Using RecordRTC and StimulusJs

Developing applications in today’s digital age entails considering diverse forms of data exchange and communication. Audio data, a staple in many app functionalities, can provide users with an enhanced interactive experience. In this article, we will walk through the steps to integrate audio recording functionality into a Ruby on Rails application using RecordRTC and StimulusJs.

Setting the Stage

Before we dive into the code, let’s understand what these tools do:

1. **Ruby on Rails**: This popular server-side web application framework written in Ruby follows the Model-View-Controller (MVC) architectural pattern.

2. **RecordRTC**: A web-based media recording library, RecordRTC lets you record audio, video, and canvas elements, and works across modern browsers.

3. **StimulusJs**: A minimal JavaScript framework, StimulusJs enriches your HTML code, allowing you to add behavior to your web application easily.

Now, let’s jump into the step-by-step guide to implement audio recording.

Step 1: Setting Up the Rails Application

Begin by creating a new Ruby on Rails application. If you have Rails installed, this can be done with the command:

```
rails new audio_recorder_app
```

Then, navigate to the newly created application’s directory:

```
cd audio_recorder_app
```

Step 2: Installing StimulusJs

Next, we’ll add StimulusJs into our application by using webpacker:

```
./bin/rails webpacker:install:stimulus
```

This command will install Stimulus and create a directory `app/javascript/controllers` for your Stimulus controllers.
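
Under webpacker, the installer also generates an entry point at `app/javascript/controllers/index.js` that starts Stimulus and auto-loads every controller in the directory. It typically looks like this:

```javascript
// app/javascript/controllers/index.js (as generated by the installer)
import { Application } from "stimulus"
import { definitionsFromContext } from "stimulus/webpack-helpers"

// Start Stimulus and register every *_controller.js file found
// in app/javascript/controllers.
const application = Application.start()
const context = require.context("controllers", true, /_controller\.js$/)
application.load(definitionsFromContext(context))
```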

Step 3: Installing RecordRTC

In order to use RecordRTC in our Rails application, we need to add it through Yarn:

```
yarn add recordrtc
```

Step 4: Implementing Audio Recording

In this step, we’ll use RecordRTC and StimulusJs to build an audio recorder within our Rails application. This involves several key tasks: setting up the controller, handling start/stop/pause/resume events, managing time, and creating a user interface for the recorder.

Let’s dive into the details.

First, we will create an AudioRecordingController which extends ApplicationController. We use the RecordRTC library for recording and StimulusReflex for real-time updates.

The connect() function initializes our controller. It registers our Stimulus controller with StimulusReflex and sets initial values for our timerInterval and secondsElapsed.
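
The ApplicationController we extend is not part of Stimulus itself; it is a shared base class, a common convention in StimulusReflex projects. A minimal sketch (the recorder controller performs the StimulusReflex.register call itself in connect()):

```javascript
// app/javascript/controllers/application_controller.js
import { Controller } from 'stimulus'

// Shared base class for all Stimulus controllers in the app.
// App-wide behavior (reflex lifecycle callbacks, etc.) lives here.
export default class extends Controller {
}
```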

```javascript
// app/javascript/controllers/audio_recording_controller.js
import ApplicationController from './application_controller.js'
import StimulusReflex from 'stimulus_reflex'
import RecordRTC from 'recordrtc'
import { serializeFormData, triggerChange } from 'helpers'

export default class extends ApplicationController {
  static targets = ['startRecording', 'stopRecording', 'resumeRecording', 'pauseRecording', 'recordedAudio', 'audioBlob', 'timeElapsed'];

  connect() {
    StimulusReflex.register(this)
    this.timerInterval = null;
    this.secondsElapsed = 0;
  }

  startRecording(event) {
    event.preventDefault()
    if (navigator.mediaDevices && navigator.mediaDevices.getUserMedia) {
      navigator.mediaDevices.getUserMedia({ audio: true }).then(stream => {
        // Record the microphone stream as audio (WebM/Opus in most browsers).
        this.recorder = RecordRTC(stream, {
          type: 'audio',
          mimeType: 'audio/webm'
        });
        this.startRecorder(event);
      }).catch((error) => {
        console.log("The following error occurred: " + error);
        alert("Please grant permission for microphone access")
      });
    } else {
      alert("Your browser does not support audio recording, please use a different browser or update your current browser")
    }
  }

  pauseRecording (event) {
    event.preventDefault();
    this.recorder.pauseRecording()
    this.startRecordingTarget.disabled = true
    this.pauseRecordingTarget.disabled = true;
    this.stopRecordingTarget.disabled = false
    this.resumeRecordingTarget.disabled = false;
    this.stopTimer();
  }

  resumeRecording (event) {
    event.preventDefault();
    this.recorder.resumeRecording()
    this.startRecordingTarget.disabled = true
    this.stopRecordingTarget.disabled = false
    this.pauseRecordingTarget.disabled = false;
    this.resumeRecordingTarget.disabled = true;
    this.startTimer();
  }

  startRecorder (event) {
    event.preventDefault();
    this.recorder.startRecording();
    this.startRecordingTarget.disabled = true
    this.stopRecordingTarget.disabled = false
    this.pauseRecordingTarget.disabled = false;

    this.startTimer();
  }


  stopRecording(event) {
    event.preventDefault()
    // RecordRTC invokes the callback with an object URL for the finished
    // recording; use it to enable playback in the <audio> element.
    this.recorder.stopRecording(audioURL => {
      this.recordedAudioTarget.src = audioURL;
      this.recordedAudioTarget.controls = true;
    });
    this.startRecordingTarget.disabled = false
    this.stopRecordingTarget.disabled = true
    this.pauseRecordingTarget.disabled = true;
    this.resumeRecordingTarget.disabled = true;
    triggerChange(this.element);
    this.stopTimer();
    this.secondsElapsed = 0;
  }


  blobToFile(theBlob, fileName) {
    return new File([theBlob], fileName, { lastModified: new Date().getTime(), type: theBlob.type })
  }

  appendFormData (formData) {
    if (!this.recorder) return formData;
    var fieldName = this.audioBlobTarget.name;
    console.log('fieldName:', fieldName);

    if (this.recorder.getBlob())
      formData.append(fieldName, this.recorder.getBlob(), (new Date()).getTime() + ".webm");

    return formData;
  }

  openRecorder () {
    return (this.stopRecordingTarget.disabled == false);
  }

  startTimer () {
    // Arrow functions keep `this` bound to the controller instance.
    this.timerInterval = setInterval(() => {
      this.secondsElapsed += 1;
      this.setTime();
    }, 1000);
  }

  stopTimer () {
    clearInterval(this.timerInterval);
  }

  setTime () {
    this.timeElapsedTarget.innerHTML = `Time Elapsed: ${this.getTimeString(this.secondsElapsed)}`;
  }

  getTimeString(seconds) {
    let hours = Math.floor(seconds / 3600);
    let minutes = Math.floor((seconds % 3600) / 60);
    let remainingSeconds = seconds % 60;

    // Add leading zeroes to hours, minutes, and seconds if needed
    hours = hours < 10 ? "0" + hours : hours;
    minutes = minutes < 10 ? "0" + minutes : minutes;
    remainingSeconds = remainingSeconds < 10 ? "0" + remainingSeconds : remainingSeconds;

    return `${hours}:${minutes}:${remainingSeconds}`;
  }
}
```

We then create functions for handling audio recording actions: startRecording(), stopRecording(), pauseRecording(), and resumeRecording(). We use navigator.mediaDevices.getUserMedia to get access to the device's media stream. We also handle errors for browsers that do not support audio recording.

In startRecording(), we create a new instance of RecordRTC, passing the audio stream and configuration options.

The pauseRecording() and resumeRecording() methods control the pause and resume actions of the recorder, respectively. stopRecording() stops the recorder, handles the recorded blob, and resets the recording interface.

The startTimer() and stopTimer() methods manage the elapsed-time counter while recording is in progress. setTime() and getTimeString() are utility methods used to display the elapsed time.

The appendFormData() function is used to append our audio blob to the form data before submission.
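
The helpers module imported at the top of the controller is app-specific and not shown here; both its path and implementation are assumptions. A minimal sketch of the triggerChange helper used in stopRecording() (serializeFormData is imported but never used in the code shown, so it is omitted):

```javascript
// app/javascript/helpers/index.js (hypothetical module behind the
// "helpers" import alias configured in webpack)

// Dispatch a bubbling "change" event on the element so that any
// listeners (form observers, StimulusReflex, etc.) notice that the
// recorder's state has changed.
export function triggerChange(element) {
  element.dispatchEvent(new Event('change', { bubbles: true }))
}
```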

Lastly, we define our HTML markup, wiring up the audio-recording controller with its targets and actions. The interface includes buttons for the start, stop, pause, and resume actions, an audio player for playback, a hidden input to store the audio blob, and a paragraph to display elapsed time.

<div class="form-group audio-field" data-controller='audio-recording'>
  <div class='row'>
    <div class='col-md-12'>
      <div class="form-group" >
        <button data-action="audio-recording#startRecording" data-target="audio-recording.startRecording">Start</button>
        <button disabled data-action="audio-recording#stopRecording" data-target="audio-recording.stopRecording">Stop</button>
        <button disabled data-action="audio-recording#pauseRecording" data-target="audio-recording.pauseRecording">Pause</button>
        <button disabled data-action="audio-recording#resumeRecording" data-target="audio-recording.resumeRecording">Resume</button>
        <audio data-target="audio-recording.recordedAudio"></audio>
        <input type="hidden" data-target='audio-recording.audioBlob' value=""
                name="target_field"
                id="target_field">
        <p data-target='audio-recording.timeElapsed'> Time Elapsed: 0 second </p>
      </div>
    </div>


  </div>
</div>
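
How appendFormData() gets invoked depends on how your form is submitted; a hypothetical fetch-based submit handler could wire it in like this:

```javascript
// Hypothetical submit handler: collect the form's data, let the recorder
// controller append its audio blob, then POST to the form's endpoint.
async function submitRecordingForm(form, recorderController) {
  let formData = new FormData(form)
  formData = recorderController.appendFormData(formData)
  await fetch(form.action, {
    method: 'POST',
    headers: { 'X-CSRF-Token': document.querySelector('meta[name="csrf-token"]').content },
    body: formData
  })
}
```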
And that’s it for the implementation of the audio recording! In the next step, we’ll show how to save and retrieve the audio data in the Rails application.

Step 5: Saving and Retrieving Audio Data

After capturing audio from the user, the next important step is to save the audio data. We will accomplish this using a Rails controller action. Our application will use Rails’ ActiveStorage feature to handle the storage of the audio file.
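
Before the attach call below can work, ActiveStorage must be installed and the model needs a declared attachment. Assuming the attachment is named voice_note, as in the controller code that follows, the setup would be:

```
bin/rails active_storage:install
bin/rails db:migrate
```

```ruby
# app/models/recording.rb
class Recording < ApplicationRecord
  # One attached audio file per recording, stored via ActiveStorage.
  has_one_attached :voice_note
end
```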

Let’s walk through the create method in our RecordingsController:

```ruby
def create
  @recording = Recording.new(main_object_params)

  # The hidden field populated by the Stimulus controller arrives as a
  # tempfile under the form's parameter name.
  tempfile = params[:main_object]['target_field']
  if tempfile.present?
    @recording.voice_note.attach(io: tempfile, filename: "recording.webm", content_type: "audio/webm")
  end

  if @recording.save
    respond_to do |format|
      format.js
    end
  else
    render :new
  end
end
```
First, we initialize a new instance of our Recording model using main_object_params.

Next, we check whether a tempfile is present in the request parameters. If it is, we use ActiveStorage’s attach method to store the audio file. The attach method takes three arguments:

  1. io: the actual file data,
  2. filename: a string that will be used as the name of the file,
  3. content_type: the type of content being stored (in our case, an audio file in WebM format).

Afterwards, we attempt to save our new Recording object using the save method. If the save succeeds, we respond with JavaScript; if it fails, we render the new recording form again.
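
The main_object_params method is not shown in the article; a minimal strong-parameters sketch might look like this, where the permitted attributes are hypothetical and depend on your Recording model:

```ruby
private

# Hypothetical strong parameters; permit whatever attributes your
# Recording model actually has. The audio tempfile is read directly
# from params in create, so it does not need to be permitted here.
def main_object_params
  params.require(:main_object).permit(:title, :description)
end
```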

To handle the file storage, we use ActiveStorage, which allows us to upload files to a cloud service like AWS S3, Google Cloud Storage, or Microsoft Azure Storage. But it can also handle local disk storage for development and testing. The configuration for ActiveStorage can be found in the config/storage.yml file.
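
For development, the default Disk service is enough. A typical config/storage.yml looks like this, paired with `config.active_storage.service = :local` in config/environments/development.rb:

```yaml
# config/storage.yml
local:
  service: Disk
  root: <%= Rails.root.join("storage") %>
```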

That’s all there is to saving audio files in Rails. With this, we are able to store user-generated audio directly in our Rails application. Playing the audio back is just as simple, as sketched below.
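
As a minimal sketch of playback, assuming @recording has its voice_note attached: ActiveStorage can generate a URL for the attachment, so a view can render a standard HTML5 audio player in one line.

```erb
<%# app/views/recordings/show.html.erb %>
<% if @recording.voice_note.attached? %>
  <%= audio_tag url_for(@recording.voice_note), controls: true %>
<% end %>
```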

Conclusion

In conclusion, adding audio recording functionality to a Ruby on Rails application is straightforward and efficient with the right tools. RecordRTC and StimulusJs complement Rails in offering an interactive, user-friendly experience. The ability to record, store, and retrieve audio data opens up vast opportunities for creative app functionalities. Whether for messaging, user feedback, or multimedia uploads, the integration of audio data can take your app to the next level.
