Camera is one of the services provided by the OpenHarmony multimedia subsystem. The camera module provides recording, preview, and photographing features and supports concurrent stream reading by multiple users.
It is considered good practice to understand the following concepts before starting development:
Video frame
A video frame is formed by the stream data of a video image. Video data streams are formed by a series of image data arranged at a fixed time interval.
Frames per second (FPS)
FPS represents the frame rate at which images are refreshed during video playback, that is, the number of image frames displayed per second. A higher frame rate means smoother video playback.
Resolution
Each image frame consists of pixels, and the number of pixels in an image is described by its resolution. For example, 1080p (1920 x 1080) indicates that the image is 1920 pixels wide and 1080 pixels high.
Multimedia services
Multimedia services are started by the Init process upon system startup, and media hardware resources (such as memory, display hardware, image sensors, and codecs) are initialized and allocated. During the initialization, the configuration file is parsed, which determines the upper limit of capabilities and resources of each service. Generally, the upper limit is configured by original equipment manufacturers (OEMs) in the configuration file. The following configuration items are available for the camera service during multimedia service initialization:
Major classes
You can use the Camera class and its asynchronous callback classes to configure and access the camera functionalities. The three callback classes correspond to different asynchronous processing scenarios, as described in Table 1.
Table 1 Class description
Configures the static camera capability through the configuration class to use basic camera functionalities.
Stream transfer
A surface is the basic data structure for transferring audio and video data. A camera is generally used as the data producer of a surface and has specific consumers in different scenarios.
Camera preview and recording outputs are video streams, and photographing outputs are image frames. Both are transferred through the Surface class. A surface can transmit media streams within a process and across processes.
Take video recording as an example. You create a Recorder instance, obtain its surface, and transfer the surface to the Camera instance. The Camera instance then works as a producer, injecting video streams into the surface, while the Recorder instance acts as the consumer, reading video streams from the surface for storage. In this way, the recorder and camera are connected through the surface.
Similarly, you can create a surface, implement consumer logic for it, and transfer it to the Camera instance, for example, to transmit video streams over the network or to save captured frame data as an image file.
The graphics module also obtains stream resources from the camera module through surfaces. For details, see development guidelines on Graphic.
Camera running process
Creating a camera
This process creates a Camera instance through CameraManager, binds the camera device to the server, and asynchronously notifies you when the creation succeeds. The following figure shows the time sequence between classes.
Taking a video/Previewing
This process creates a Camera instance via CameraKit, and configures frame attributes via FrameConfig for recording or previewing. The following figure shows the time sequence.
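The recording/preview sequence can be outlined in pseudocode. The class names (CameraKit, FrameConfig) follow the classes named above, but the method names and signatures below are illustrative assumptions; consult the camera API reference of your OpenHarmony version for the exact interfaces.

```
// Pseudocode outline; method names are illustrative, not exact API.
cameraKit = CameraKit.getInstance()              // obtain the camera service entry
cameraId  = first of cameraKit.getCameraIds()    // pick an available camera device
cameraKit.createCamera(cameraId, stateCallback)  // bind the device; result arrives asynchronously

// In the state callback, once creation succeeds:
frameConfig = new FrameConfig(RECORD)            // or PREVIEW
frameConfig.addSurface(recorderSurface)          // surface obtained from a Recorder instance
camera.triggerLoopingCapture(frameConfig)        // start the recording/preview stream
```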