Live streaming has become one of the most popular ways to share information and connect with an audience in real time. Whether you are a business owner, a musician, or a teacher, live streaming can be a powerful tool for sharing your message with your audience.
Choosing the right live streaming platform is a critical decision, because it affects the quality of the stream, the size of your audience, and the overall success of your broadcast. Here are a few factors to consider when choosing a live streaming platform:
- Features: Look for a platform that offers the capabilities you need to create a high-quality stream. For example, you may want a platform that supports multiple camera angles, lets you share your screen, or allows multiple broadcasters.
- Integration: Integrating the live streaming platform into your app should be simple and fast.
- Budget: Live streaming platforms can be expensive, so it is important to choose one that fits your budget.
Why Choose Video SDK?
Video SDK is an ideal choice for anyone looking for a live streaming platform that offers the features needed to create high-quality streams. The platform supports screen sharing and real-time messaging, lets broadcasters invite audience members on stage, and scales to 100K+ participants, ensuring your live stream stays interactive and engaging. With Video SDK, you can also stream using your own custom-designed layout templates.
On the integration side, Video SDK offers a simple and fast integration process, allowing you to embed live streaming into your app seamlessly. This ensures you can enjoy the benefits of live streaming without any technical hurdles or a lengthy implementation.
Video SDK is also budget-friendly, making it an affordable option for businesses of every size. You can enjoy the benefits of a feature-rich live streaming platform without breaking the bank, which makes it ideal for startups and small businesses.
Build an Interactive Live Streaming React Native App in 6 Steps
The following steps give you all the information you need to quickly build an interactive live streaming app. Follow along carefully, and if you run into any trouble, let us know on Discord right away; we will be happy to help.
Prerequisites
Before proceeding, make sure your development environment meets the following requirements:
- A Video SDK developer account (don't have one? Sign up via the Video SDK dashboard)
- A basic understanding of React
- Node.js v12+
- npm v6+ (comes bundled with newer Node versions)
- Android Studio or Xcode installed
You need a Video SDK account to generate a token. Visit the Video SDK dashboard to generate one.
App Architecture
This app will contain two screens:
- Join screen: lets a speaker create a studio or join a predefined studio, and lets a viewer join a predefined studio.
- Speaker screen: contains the list of speakers and studio controls such as enabling/disabling the mic and camera, and leaving the studio.
Create the App
Create a new React Native app with the following command:
npx react-native init AppName
For React Native environment setup, you can follow the official documentation.
Install Video SDK
Install Video SDK with the command below. Make sure you are in your project directory before running it.
For npm:
npm install "@videosdk.live/react-native-sdk"
For Yarn:
yarn add "@videosdk.live/react-native-sdk"
Project structure:
root
├── node_modules
├── android
├── ios
├── App.js
├── api.js
├── index.js
Android Setup
Step 1: Add the required permissions in the AndroidManifest.xml file.
<manifest
xmlns:android="http://schemas.android.com/apk/res/android"
package="com.cool.app"
>
<!-- Give all the required permissions to app -->
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
<!-- Needed to communicate with already-paired Bluetooth devices. (Legacy up to Android 11) -->
<uses-permission
android:name="android.permission.BLUETOOTH"
android:maxSdkVersion="30" />
<uses-permission
android:name="android.permission.BLUETOOTH_ADMIN"
android:maxSdkVersion="30" />
<!-- Needed to communicate with already-paired Bluetooth devices. (Android 12 upwards)-->
<uses-permission android:name="android.permission.BLUETOOTH_CONNECT" />
<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.MODIFY_AUDIO_SETTINGS" />
<uses-permission android:name="android.permission.RECORD_AUDIO" />
<uses-permission android:name="android.permission.SYSTEM_ALERT_WINDOW" />
<uses-permission android:name="android.permission.FOREGROUND_SERVICE"/>
<uses-permission android:name="android.permission.WAKE_LOCK" />
<application>
<meta-data
android:name="live.videosdk.rnfgservice.notification_channel_name"
android:value="Meeting Notification"
/>
<meta-data
android:name="live.videosdk.rnfgservice.notification_channel_description"
android:value="Whenever meeting started notification will appear."
/>
<meta-data
android:name="live.videosdk.rnfgservice.notification_color"
android:resource="@color/red"
/>
<service android:name="live.videosdk.rnfgservice.ForegroundService" android:foregroundServiceType="mediaProjection"></service>
<service android:name="live.videosdk.rnfgservice.ForegroundServiceTask"></service>
</application>
</manifest>
Step 2: Link a couple of internal library dependencies in the android/app/build.gradle file.
dependencies {
  // `compile` was removed in recent Gradle versions; use `implementation`
  implementation project(':rnfgservice')
  implementation project(':rnwebrtc')
  implementation project(':rnincallmanager')
}
Include the dependencies in android/settings.gradle:
include ':rnwebrtc'
project(':rnwebrtc').projectDir = new File(rootProject.projectDir, '../node_modules/@videosdk.live/react-native-webrtc/android')
include ':rnincallmanager'
project(':rnincallmanager').projectDir = new File(rootProject.projectDir, '../node_modules/@videosdk.live/react-native-incallmanager/android')
include ':rnfgservice'
project(':rnfgservice').projectDir = new File(rootProject.projectDir, '../node_modules/@videosdk.live/react-native-foreground-service/android')
Update MainApplication.java to use InCallManager and run some foreground services.
import live.videosdk.rnfgservice.ForegroundServicePackage;
import live.videosdk.rnincallmanager.InCallManagerPackage;
import live.videosdk.rnwebrtc.WebRTCModulePackage;
public class MainApplication extends Application implements ReactApplication {
private static List<ReactPackage> getPackages() {
return Arrays.<ReactPackage>asList(
/* Initialise foreground service, incall manager and webrtc module */
new ForegroundServicePackage(),
new InCallManagerPackage(),
new WebRTCModulePackage()
);
}
}
Some devices may run into a WebRTC issue; to fix it, update your android/gradle.properties file with the following:
# This one fixes a weird WebRTC runtime problem on some devices.
android.enableDexingArtifactTransform.desugaring=false
If you use ProGuard, make the changes shown below in the android/app/proguard-rules.pro file (this is optional):
-keep class org.webrtc.** { *; }
Step 3: Update the colors.xml file with some new colors for the internal dependencies.
<resources>
<item name="red" type="color">#FC0303</item>
<integer-array name="androidcolors">
<item>@color/red</item>
</integer-array>
</resources>
iOS Setup
Step 1: Install react-native-incallmanager.
$ yarn add @videosdk.live/react-native-incallmanager
Step 2: Make sure you are using CocoaPods 1.10 or higher. To update CocoaPods, you can install the gem again.
$ [sudo] gem install cocoapods
Step 3: Manual linking (if react-native-incall-manager is not linked automatically):
- Drag node_modules/@videosdk.live/react-native-incall-manager/ios/RNInCallManager.xcodeproj under your project's Libraries folder in Xcode.
- Select your project -> Build Phases -> Link Binary With Libraries.
- Drag Libraries/RNInCallManager.xcodeproj/Products/libRNInCallManager.a into Link Binary With Libraries.
- Select your project -> Build Settings, and add $(SRCROOT)/../node_modules/@videosdk.live/react-native-incall-manager/ios/RNInCallManager to Header Search Paths.
Step 4: Change the path of react-native-webrtc:
pod 'react-native-webrtc', :path => '../node_modules/@videosdk.live/react-native-webrtc'
Step 5: Change your platform version.
- Change the platform field in your Podfile to 11.0 or above, because react-native-webrtc does not support iOS < 11: platform :ios, '11.0'
Step 6: After updating the version, you have to install the pods:
pod install
Step 7: Then add libreact-native-webrtc.a to Link Binary With Libraries, in the target of your main project folder.
Step 8: Now add the following permissions to Info.plist (project folder/ios/projectName/Info.plist):
<key>NSCameraUsageDescription</key>
<string>Camera permission description</string>
<key>NSMicrophoneUsageDescription</key>
<string>Microphone permission description</string>
Register the Service
Register the Video SDK service in your root index.js file to initialize the service.
import { register } from '@videosdk.live/react-native-sdk';
import { AppRegistry } from 'react-native';
import { name as appName } from './app.json';
import App from './App.js'; // App.js sits at the project root, per the structure above
// Register the service
register();
AppRegistry.registerComponent(appName, () => App);
Start Writing Your Code
Step 1: Get started with api.js
Before jumping into anything else, we have to write the API call that generates a unique meetingId. You will need an auth token; you can generate one either with videosdk-rtc-api-server-examples or from the Video SDK dashboard.
// Auth token we will use to generate a meeting and connect to it
export const authToken = "<Generated-from-dashboard>";
// API call to create meeting
export const createMeeting = async ({ token }) => {
const res = await fetch(`https://api.videosdk.live/v1/meetings`, {
method: "POST",
headers: {
authorization: `${token}`,
"Content-Type": "application/json",
},
body: JSON.stringify({ region: "sg001" }),
});
const { meetingId } = await res.json();
return meetingId;
};
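The createMeeting function above assumes a happy-path response. A slightly more defensive variant can be sketched as a pure helper that checks the result before reading meetingId; this is an illustration, not SDK API, and you would call it as parseMeetingResponse(res.ok, await res.json()) inside createMeeting.

```javascript
// Defensive parsing of the create-meeting response (illustrative sketch).
// Throws a descriptive error instead of silently returning `undefined`
// when the API call fails or the body has an unexpected shape.
function parseMeetingResponse(ok, body) {
  if (!ok) {
    throw new Error(`createMeeting failed: ${JSON.stringify(body)}`);
  }
  const { meetingId } = body;
  if (typeof meetingId !== "string" || meetingId.length === 0) {
    throw new Error("createMeeting: response did not contain a meetingId");
  }
  return meetingId;
}

console.log(parseMeetingResponse(true, { meetingId: "abcd-efgh-ijkl" })); // abcd-efgh-ijkl
```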
Step 2: Wireframe App.js with all the components
To build the wireframe of App.js, we will use the Video SDK hooks and context providers. Video SDK provides MeetingProvider, MeetingConsumer, useMeeting, and useParticipant. Let's understand each of them.
First, we will explore the context provider and consumer. Context is primarily used when some data needs to be accessible by many components at different nesting levels.
- MeetingProvider: this is the context provider. It accepts config and token as props. The provider component accepts a value prop to be passed to consuming components that are descendants of this provider. One provider can be connected to many consumers. Providers can be nested to override values deeper within the tree.
- MeetingConsumer: this is the context consumer. All consumers that are descendants of a provider will re-render whenever the provider's value prop changes.
- useMeeting: this is the meeting hook API. It includes all the meeting-level information, such as join, leave, enable/disable the mic or webcam, and so on.
- useParticipant: this is the participant hook API. It is responsible for handling all the events and props related to one particular participant, such as name, webcamStream, micStream, and so on.
The meeting context helps you listen for all the changes, such as a participant joining the meeting or toggling their mic or camera.
Let's get started with a few lines of code in App.js.
import React, { useState, useMemo, useRef, useEffect } from "react";
import {
SafeAreaView,
TouchableOpacity,
Text,
TextInput,
View,
FlatList,
Clipboard,
} from "react-native";
import {
MeetingProvider,
useMeeting,
useParticipant,
MediaStream,
RTCView,
Constants,
} from "@videosdk.live/react-native-sdk";
import { createMeeting, authToken } from "./api";
// Responsible for either schedule new meeting or to join existing meeting as a host or as a viewer.
function JoinScreen({ getMeetingAndToken, setMode }) {
return null;
}
// Responsible for managing participant video stream
function ParticipantView(props) {
return null;
}
// Responsible for managing meeting controls such as toggle mic / webcam and leave
function Controls() {
return null;
}
// Responsible for Speaker side view, which contains Meeting Controls(toggle mic/webcam & leave) and Participant list
function SpeakerView() {
return null;
}
// Responsible for Viewer side view, which contains video player for streaming HLS and managing HLS state (HLS_STARTED, HLS_STOPPING, HLS_STARTING, etc.)
function ViewerView() {
return null;
}
// Responsible for managing two view (Speaker & Viewer) based on provided mode (`CONFERENCE` & `VIEWER`)
function Container(props) {
return null;
}
function App() {
const [meetingId, setMeetingId] = useState(null);
//State to handle the mode of the participant i.e. CONFERENCE or VIEWER
const [mode, setMode] = useState("CONFERENCE");
//Getting MeetingId from the API we created earlier
const getMeetingAndToken = async (id) => {
const meetingId =
id == null ? await createMeeting({ token: authToken }) : id;
setMeetingId(meetingId);
};
return authToken && meetingId ? (
<MeetingProvider
config={{
meetingId,
micEnabled: true,
webcamEnabled: true,
name: "C.V. Raman",
//These will be the mode of the participant CONFERENCE or VIEWER
mode: mode,
}}
token={authToken}
>
<Container />
</MeetingProvider>
) : (
<JoinScreen getMeetingAndToken={getMeetingAndToken} setMode={setMode} />
);
}
export default App;
Step 3: Implement the Join Screen
The join screen will act as a medium to either schedule a new meeting or join an existing one as a host or a viewer.
It will have three buttons:
- Join as Host: when this button is clicked, the person joins the entered meetingId as a host.
- Join as Viewer: when this button is clicked, the person joins the entered meetingId as a viewer.
- Create Studio Room: when this button is clicked, the person joins a new meeting as a host.
function JoinScreen({ getMeetingAndToken, setMode }) {
const [meetingVal, setMeetingVal] = useState("");
const JoinButton = ({ value, onPress }) => {
return (
<TouchableOpacity
style={{
backgroundColor: "#1178F8",
padding: 12,
marginVertical: 8,
borderRadius: 6,
}}
onPress={onPress}
>
<Text style={{ color: "white", alignSelf: "center", fontSize: 18 }}>
{value}
</Text>
</TouchableOpacity>
);
};
return (
<SafeAreaView
style={{
flex: 1,
backgroundColor: "black",
justifyContent: "center",
paddingHorizontal: 6 * 10,
}}
>
<TextInput
value={meetingVal}
onChangeText={setMeetingVal}
placeholder={"XXXX-XXXX-XXXX"}
placeholderTextColor={"grey"}
style={{
padding: 12,
borderWidth: 1,
borderColor: "white",
borderRadius: 6,
color: "white",
marginBottom: 16,
}}
/>
<JoinButton
onPress={() => {
getMeetingAndToken(meetingVal);
}}
value={"Join as Host"}
/>
<JoinButton
onPress={() => {
setMode("VIEWER");
getMeetingAndToken(meetingVal);
}}
value={"Join as Viewer"}
/>
<Text
style={{
alignSelf: "center",
fontSize: 22,
marginVertical: 16,
fontStyle: "italic",
color: "grey",
}}
>
---------- OR ----------
</Text>
<JoinButton
onPress={() => {
getMeetingAndToken();
}}
value={"Create Studio Room"}
/>
</SafeAreaView>
);
}
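Since JoinScreen passes whatever the user typed straight to getMeetingAndToken, a small client-side format check can catch obvious typos before a network round trip. This validator is a hypothetical helper based on the XXXX-XXXX-XXXX placeholder shown in the TextInput, not part of the SDK; the server remains the source of truth for whether an id exists.

```javascript
// Hypothetical client-side check for the XXXX-XXXX-XXXX meeting id format
// suggested by the TextInput placeholder. This only filters obvious typos;
// the server still decides whether the meeting actually exists.
function looksLikeMeetingId(value) {
  return /^[a-z0-9]{4}-[a-z0-9]{4}-[a-z0-9]{4}$/i.test(value.trim());
}

console.log(looksLikeMeetingId("abcd-1234-wxyz")); // true
console.log(looksLikeMeetingId("not-a-meeting"));  // false
```

You could call this in the Join as Host / Join as Viewer onPress handlers and show an alert instead of invoking getMeetingAndToken when it returns false.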
Step 4: Implement the Container Component
- The next step is to create a container that manages the JoinScreen, SpeakerView, and ViewerView components based on the mode.
- We will check the mode of the localParticipant: if it is CONFERENCE, we show SpeakerView; otherwise we show ViewerView.
function Container() {
const { join, changeWebcam, localParticipant } = useMeeting({
onError: (error) => {
console.log(error.message);
},
});
return (
<View style={{ flex: 1 }}>
{localParticipant?.mode == Constants.modes.CONFERENCE ? (
<SpeakerView />
) : localParticipant?.mode == Constants.modes.VIEWER ? (
<ViewerView />
) : (
<View
style={{
flex: 1,
justifyContent: "center",
alignItems: "center",
backgroundColor: "black",
}}
>
<Text style={{ fontSize: 20, color: "white" }}>
Press Join button to enter studio.
</Text>
<Button
btnStyle={{
marginTop: 8,
paddingHorizontal: 22,
padding: 12,
borderWidth: 1,
borderColor: "white",
borderRadius: 8,
}}
buttonText={"Join"}
onPress={() => {
join();
setTimeout(() => {
changeWebcam();
}, 300);
}}
/>
</View>
)}
</View>
);
}
// Common Component which will also be used in Controls Component
const Button = ({ onPress, buttonText, backgroundColor, btnStyle }) => {
return (
<TouchableOpacity
onPress={onPress}
style={{
...btnStyle,
backgroundColor: backgroundColor,
padding: 10,
borderRadius: 8,
}}
>
<Text style={{ color: "white", fontSize: 12 }}>{buttonText}</Text>
</TouchableOpacity>
);
};
Step 5: Implement SpeakerView
The next step is to create the SpeakerView and Controls components to manage features such as join, leave, mute, and unmute.
- We will get all the participants from the useMeeting hook and filter them by CONFERENCE mode so that only the speakers are shown on screen.
function SpeakerView() {
// Get the Participant Map and meetingId
const { meetingId, participants } = useMeeting({});
// For getting speaker participant, we will filter out `CONFERENCE` mode participant
const speakers = useMemo(() => {
const speakerParticipants = [...participants.values()].filter(
(participant) => {
return participant.mode == Constants.modes.CONFERENCE;
}
);
return speakerParticipants;
}, [participants]);
return (
<SafeAreaView style={{ backgroundColor: "black", flex: 1 }}>
{/* Render Header for copy meetingId and leave meeting*/}
<HeaderView />
{/* Render Participant List */}
{speakers.length > 0 ? (
<FlatList
data={speakers}
renderItem={({ item }) => {
return <ParticipantView participantId={item.id} />;
}}
/>
) : null}
{/* Render Controls */}
<Controls />
</SafeAreaView>
);
}
function HeaderView() {
const { meetingId, leave } = useMeeting();
return (
<View
style={{
flexDirection: "row",
padding: 16,
justifyContent: "space-evenly",
alignItems: "center",
}}
>
<Text style={{ fontSize: 24, color: "white" }}>{meetingId}</Text>
<Button
btnStyle={{
borderWidth: 1,
borderColor: "white",
}}
onPress={() => {
Clipboard.setString(meetingId);
alert("MeetingId copied successfully");
}}
buttonText={"Copy MeetingId"}
backgroundColor={"transparent"}
/>
<Button
onPress={() => {
leave();
}}
buttonText={"Leave"}
backgroundColor={"#FF0000"}
/>
</View>
);
}
function Container(){
...
const mMeeting = useMeeting({
onMeetingJoined: () => {
// We will pin the local participant if he joins in CONFERENCE mode
if (mMeetingRef.current.localParticipant.mode == "CONFERENCE") {
mMeetingRef.current.localParticipant.pin();
}
},
...
});
// We will create a ref to meeting object so that when used inside the
// Callback functions, meeting state is maintained
const mMeetingRef = useRef(mMeeting);
useEffect(() => {
mMeetingRef.current = mMeeting;
}, [mMeeting]);
return <>...</>;
}
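The participants object returned by the useMeeting hook is a Map keyed by participant id. The speaker filter used in SpeakerView above can be exercised in isolation as plain JavaScript; in this standalone sketch the participant objects are hand-built stand-ins, not real SDK objects.

```javascript
// `participants` from useMeeting is a Map of id -> participant object.
// Only CONFERENCE-mode participants (the speakers) are rendered in
// SpeakerView; VIEWER-mode participants watch the HLS stream instead.
function filterSpeakers(participants) {
  return [...participants.values()].filter((p) => p.mode === "CONFERENCE");
}

// Hand-built stand-ins for SDK participant objects:
const demoParticipants = new Map([
  ["p1", { id: "p1", mode: "CONFERENCE" }],
  ["p2", { id: "p2", mode: "VIEWER" }],
  ["p3", { id: "p3", mode: "CONFERENCE" }],
]);
console.log(filterSpeakers(demoParticipants).map((p) => p.id)); // [ 'p1', 'p3' ]
```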
- We will create a ParticipantView to show the participant's media. For that, the webcamStream from the useParticipant hook will be used to play the participant's media.
function ParticipantView({ participantId }) {
const { webcamStream, webcamOn } = useParticipant(participantId);
return webcamOn && webcamStream ? (
<RTCView
streamURL={new MediaStream([webcamStream.track]).toURL()}
objectFit={"cover"}
style={{
height: 300,
marginVertical: 8,
marginHorizontal: 8,
}}
/>
) : (
<View
style={{
backgroundColor: "grey",
height: 300,
justifyContent: "center",
alignItems: "center",
marginVertical: 8,
marginHorizontal: 8,
}}
>
<Text style={{ fontSize: 16 }}>NO MEDIA</Text>
</View>
);
}
- We will add the Controls component, which will allow the speaker to toggle media and start/stop HLS.
function Controls() {
const { toggleWebcam, toggleMic, startHls, stopHls, hlsState } = useMeeting(
{}
);
const _handleHLS = async () => {
if (!hlsState || hlsState === "HLS_STOPPED") {
startHls({
layout: {
type: "SPOTLIGHT",
priority: "PIN",
gridSize: 4,
},
theme: "DARK",
orientation: "portrait",
});
} else if (hlsState === "HLS_STARTED" || hlsState === "HLS_PLAYABLE") {
stopHls();
}
};
return (
<View
style={{
padding: 24,
flexDirection: "row",
justifyContent: "space-between",
}}
>
<Button
onPress={() => {
toggleWebcam();
}}
buttonText={"Toggle Webcam"}
backgroundColor={"#1178F8"}
/>
<Button
onPress={() => {
toggleMic();
}}
buttonText={"Toggle Mic"}
backgroundColor={"#1178F8"}
/>
{hlsState === "HLS_STARTED" ||
hlsState === "HLS_STOPPING" ||
hlsState === "HLS_STARTING" ||
hlsState === "HLS_PLAYABLE" ? (
<Button
onPress={() => {
_handleHLS();
}}
buttonText={
hlsState === "HLS_STARTED"
? `Live Starting`
: hlsState === "HLS_STOPPING"
? `Live Stopping`
: hlsState === "HLS_PLAYABLE"
? `Stop Live`
: `Loading...`
}
backgroundColor={"#FF5D5D"}
/>
) : (
<Button
onPress={() => {
_handleHLS();
}}
buttonText={`Go Live`}
backgroundColor={"#1178F8"}
/>
)}
</View>
);
}
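The nested ternaries above that pick the Go Live button's label amount to a small state-to-label lookup. Factoring it out, as in this refactoring sketch (not SDK API), also documents the HLS lifecycle: HLS_STOPPED -> HLS_STARTING -> HLS_STARTED -> HLS_PLAYABLE -> HLS_STOPPING.

```javascript
// Maps the HLS state reported by useMeeting to the label shown on the
// Go Live button, mirroring the ternary chain in Controls above.
const HLS_BUTTON_LABELS = {
  HLS_STARTED: "Live Starting",
  HLS_STOPPING: "Live Stopping",
  HLS_STARTING: "Loading...",
  HLS_PLAYABLE: "Stop Live",
};

function hlsButtonLabel(hlsState) {
  // Any state outside the four above (including undefined and
  // HLS_STOPPED) means the stream is idle, so offer "Go Live".
  return HLS_BUTTON_LABELS[hlsState] ?? "Go Live";
}

console.log(hlsButtonLabel("HLS_STOPPED"));  // Go Live
console.log(hlsButtonLabel("HLS_PLAYABLE")); // Stop Live
```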
Step 6: Implement ViewerView
When the host (a CONFERENCE-mode participant) starts the live stream, viewers will be able to watch it.
To implement the player view, we will use react-native-video, which will be helpful for playing the HLS stream.
Let's first add this package.
For npm:
npm install react-native-video
For Yarn:
yarn add react-native-video
With react-native-video installed, we will get hlsUrls and hlsState from the useMeeting hook, which will be used to play the HLS stream in the player.
//Add imports
// imports react-native-video
import Video from "react-native-video";
function ViewerView({}) {
const { hlsState, hlsUrls } = useMeeting();
return (
<SafeAreaView style={{ flex: 1, backgroundColor: "black" }}>
{hlsState == "HLS_PLAYABLE" ? (
<>
{/* Render Header for copy meetingId and leave meeting*/}
<HeaderView />
{/* Render VideoPlayer that will play `downstreamUrl`*/}
<Video
controls={true}
source={{
uri: hlsUrls.downstreamUrl,
}}
resizeMode={"stretch"}
style={{
flex: 1,
backgroundColor: "black",
}}
onError={(e) => console.log("error", e)}
/>
</>
) : (
<SafeAreaView
style={{ flex: 1, justifyContent: "center", alignItems: "center" }}
>
<Text style={{ fontSize: 20, color: "white" }}>
HLS is not started yet or is stopped
</Text>
</SafeAreaView>
)}
</SafeAreaView>
);
}
Run Your Code
# for Android
npx react-native run-android
# for iOS
npx react-native run-ios
Stuck anywhere? Check out this example code on GitHub.
Conclusion
With this, we have successfully built a React Native live streaming app using Video SDK. If you want to add features such as chat messaging and screen sharing, you can always check out our documentation. If you face any difficulties with the implementation, you can reach out to us on our Discord community.
Resources
- React Interactive Live Streaming with Video SDK - YouTube
- Build a React Native Video Calling App with Video SDK
- Build a React Native Android Video Calling App with 📞 Callkeep using 🔥 Firebase and Video SDK
- React Native Group Video Calling App Tutorial - YouTube
- How to Build React Native IOS Video Call app using CallKeep using Firebase and Video SDK Part-2