Understanding the FCC’s New STIR/SHAKEN Rules: A Compliance Guide for Businesses
The post The New FCC STIR/SHAKEN Rules and Why They Matter for Your Business in 2025 appeared first on Sangoma Technologies.
The AV1 codec represents the cutting edge of video compression technology. Developed by the Alliance for Open Media (AOMedia), a consortium including tech giants like Google, Microsoft, Netflix, and Mozilla, AV1 is a royalty-free video codec designed to meet the ever-growing demand for high-quality video streaming while minimizing bandwidth usage.
It offers up to 30-50% better compression than predecessors like VP9, VP8, and H.264, enabling sharper visuals at lower bitrates. This makes it an ideal choice for bandwidth-constrained environments, such as mobile networks or rural areas with limited internet speeds. Despite these impressive capabilities, adoption of AV1 has been slower than anticipated, for several reasons. AV1 has higher computational demands: its advanced compression algorithms require significantly more processing power for encoding and decoding. Hardware acceleration for AV1 is still emerging, so using AV1 can result in higher CPU and energy consumption and suboptimal performance on low-end devices and those without hardware support.
The path to adding AV1 support in Jitsi was not straightforward. Before we could enable AV1, it was essential to integrate support for the various modes and complexities that the codec offers, both in the Jitsi Meet client and the Jitsi Videobridge (JVB). Jitsi had to extend the JVB’s capabilities to handle AV1 streams, including managing simulcast and SVC modes seamlessly for multi-user conferences. This groundwork laid the foundation for AV1’s eventual inclusion as the preferred codec in Jitsi deployments.
AV1’s RTP encapsulation is unusual, if not weird, compared to RTP payloads for other video codecs – all the information an RTP Selective Forwarding Unit (SFU) like the JVB needs is carried in a “Dependency Descriptor” RTP header extension, rather than in the RTP payload proper. This means that the JVB doesn’t technically need to support the AV1 codec at all – it only needs to support the dependency descriptor header extension.
This format is unusual in that it was developed not in the IETF, where RTP payload formats are normally defined, but rather by AOMedia itself. The main consequence of this is that the header is encoded very much like a video codec: it’s very parsimonious with bits, at the cost of being both annoyingly complicated to parse and stateful – information needed to parse the header is sent only at intervals. For more information about this complexity, see Lorenzo’s post from several years ago – https://www.meetecho.com/blog/av1-svc/.
Handling the complex parsing is relatively straightforward once some utility classes are written. Handling the statefulness is harder, especially since an SFU always needs to be prepared for packet loss and reordering, so packets that use some state may arrive before the packet that provides it. Thus the JVB needs to keep track of the state reported in the packets that carry it, pass it forward to the parser for subsequent headers, and handle the possibility that a state update was missed.
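To make the statefulness concrete, here is a minimal sketch (in JavaScript for illustration; the JVB itself is JVM-based, and `parseTemplates`/`parseDescriptor` are hypothetical stand-ins for the real parsing utilities):

```js
// Hypothetical sketch of dependency-descriptor state tracking; parseTemplates()
// and parseDescriptor() stand in for the real parsing code.
class Av1DdTracker {
  constructor() {
    this.templates = null;      // last template structure seen
    this.templatesFrame = -1;   // frame number it was attached to
  }

  onPacket(frameNumber, descriptorBytes) {
    // Some packets carry the full template structure; cache the newest one,
    // tolerating reordered packets that carry an older copy.
    const templates = parseTemplates(descriptorBytes); // null if not present
    if (templates && frameNumber > this.templatesFrame) {
      this.templates = templates;
      this.templatesFrame = frameNumber;
    }
    if (!this.templates) {
      // The state update was lost or hasn't arrived yet: this header can't be
      // parsed, so the caller must buffer the packet or request a keyframe.
      return null;
    }
    return parseDescriptor(descriptorBytes, this.templates);
  }
}
```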
Because the AV1 DD is a header extension, it can be applied to codecs other than AV1 itself. Notably, this allows us to indicate temporal scalability of an H.264 stream, which is useful because H.264 (without its little-implemented SVC extensions) has no way to indicate temporal scalability of its packets. As a result, the work to support the AV1 DD also allows Jitsi to enable scalable H.264 streams for JVB-mediated conferences as well! (Though there are currently some bugs in Chrome that make this less efficient than it could be – see here and here.) In fact the header can be applied to any video codec, though we still prefer to handle VP8 and VP9 using their in-payload information when that information is available.
Another notable feature of the AV1 dependency descriptor is its concept of “decode targets”, which are a specific layer that a decoder can decode, and thus that an SFU can forward. These usually correspond to a specific spatial and temporal layer, but technically they do not have to. The idea is that a decoder can choose among the various decode targets present in the stream. In most cases it would want to choose the highest-quality one available, but in some circumstances (for instance if it was CPU-constrained, or displaying a source in a small window) it could choose a lower-quality target instead.
This has the consequence that a stream needs to be explicit about which decode targets are actually present. An SFU, by design, forwards only some of the layers in a stream; this is what it means to “selectively forward”. As a result, the ultimate decoder needs to know which layers it can actually expect to receive and which ones it won’t, or it can end up waiting forever for frames it thinks it should render. To handle this case, the dependency descriptor contains a decode target bitmask indicating which layers are still present in the stream. This bitmask then needs to be updated every time the SFU changes its layer forwarding decision, so the decoder doesn’t wait on a decode target that won’t be arriving any more, or, conversely, so it knows to start decoding a layer that has newly started arriving. Fortunately, the logic to do this work is not too complicated, and is similar in complexity to the logic needed to modify the in-payload information for forwarded VP8 or VP9 streams.
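As a rough illustration (field and function names here are made up, not the actual JVB code), the rewrite boils down to recomputing a bitmask whenever the forwarding decision changes:

```js
// Illustrative only: recompute the active decode target bitmask after a
// forwarding change, so receivers don't wait on targets that stopped arriving.
function activeDecodeTargetsBitmask(forwardedTargets) {
  let mask = 0;
  for (const dt of forwardedTargets) {
    mask |= 1 << dt; // one bit per decode target index
  }
  return mask;
}

// Example: forwarding only decode targets 0 and 1 out of 3 yields 0b011.
const mask = activeDecodeTargetsBitmask([0, 1]);
```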
With the release of Chromium M111, significant advancements were made in WebRTC, particularly the introduction of Scalable Video Coding (SVC) support. This update enabled WebRTC applications to configure encoding parameters for SVC by extending the RTCRtpEncodingParameters dictionary. Around the same time, Chrome also introduced AV1 and VP9 simulcast support. Before Chromium M111, K-SVC (a flavor of SVC in which spatial layers depend on one another only at keyframes) was the only supported SVC mode. This update allowed Jitsi Meet to experiment with various scalability modes for AV1 and VP9.
In a Jitsi conference, the client and JVB work in tandem to ensure efficient video streaming. This involves an ongoing exchange of sender and receiver video constraints.
This dynamic coordination minimizes bandwidth usage and optimizes network resources while maintaining the quality of the user experience. Once the client receives sender constraints, it configures its outbound video streams using RTCRtpEncodingParameters, tailoring the encodings to the selected codec and the constraints in effect.
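For example, a client might request full SVC when adding its video track (a minimal sketch using the standard WebRTC API, not Jitsi Meet’s actual code; `videoTrack` is assumed to be the local camera track and the bitrate cap is illustrative):

```js
// Ask the encoder for L3T3 full SVC: 3 spatial and 3 temporal layers.
const pc = new RTCPeerConnection();
const transceiver = pc.addTransceiver(videoTrack, {
  direction: 'sendonly',
  sendEncodings: [{
    scalabilityMode: 'L3T3', // 'L3T3_KEY' would request K-SVC instead
    maxBitrate: 1_500_000,   // illustrative cap, in bps
  }],
});
```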
For AV1 and VP9, three operating modes were tested: simulcast, K-SVC, and full SVC.
SVC allows a single video stream to be encoded in layers, each layer adding more detail or resolution to the base stream, while simulcast involves sending multiple independent video streams of the same content, each at a different resolution and bitrate. The table below summarizes the trade-offs.
| Aspect | Simulcast | SVC |
| --- | --- | --- |
| Encoding | Multiple streams encoded separately. | One stream encoded with multiple layers. |
| Bandwidth Usage (Sender) | Higher (multiple streams). | Lower (single stream). |
| CPU Usage (Sender) | High (due to multiple encodings). | Lower (single encoding with layers). |
| CPU Usage (Receiver) | Lower (no need to decode layers). | Higher (decoding layered streams). |
| Adaptability | Coarser (switching between streams). | Finer (dynamic layer adjustment). |
| Compatibility | Broadly supported in WebRTC platforms. | Limited support, requires advanced codecs. |
After extensive performance testing and careful evaluation of product requirements, Jitsi selected full SVC mode as the default configuration for both AV1 and VP9. This choice ensures optimal scalability and video quality across Jitsi’s deployments. However, this behavior is not rigid; it is configurable and can be easily overridden through config.js settings, providing flexibility to adapt to specific use cases or deployment needs.
To determine the optimal video codec for the Jitsi Meet client, Jitsi conducted comprehensive testing under realistic conditions, ensuring that codec selection would meet product needs for quality, performance, and scalability.
By rigorously analyzing the metrics gathered in this testing, Jitsi optimized its codec selection strategy. While full SVC mode remained the default for high-performance scenarios, fallback options like VP9 and VP8 were configured for legacy or resource-constrained devices. This comprehensive approach ensures that the Jitsi Meet client provides the best possible video experience across a wide range of devices and network conditions.
With the integration of AV1 into Jitsi Meet, users benefit from superior compression and high-quality video at lower bitrates. However, these advantages come at the cost of increased computational demands, especially on low-end devices. To address this, Jitsi introduced a three-fold adaptive quality control mechanism, ensuring a seamless experience even under CPU constraints.
This adaptive approach enables Jitsi Meet to leverage the advanced capabilities of AV1 while ensuring that users with diverse hardware configurations can participate in meetings without disruptions caused by excessive CPU usage.
However, when the CPU spike originates from an external process rather than the Jitsi Meet client, the adaptive mode ensures that quality degradation is minimal. To enhance the user experience, Jitsi Meet also incorporates a recovery mechanism that restores the video configuration once the external constraints are resolved.
This gradual approach minimizes the risk of overloading the system during recovery. It also adapts to fluctuating CPU availability, maintaining a balance between performance and quality. The client handles this entire process dynamically without any user interaction, providing a seamless experience.
Firefox and Safari do not advertise support for the AV1 codec yet. As a result, when users on these browsers join a call, all other participants automatically switch to the next codec in the preferred list, ensuring compatibility across all endpoints.
Additionally, while Chromium-based mobile endpoints are capable of both encoding and decoding AV1, Jitsi has opted to use AV1 only for decoding. For encoding, a lower-complexity codec is used, as encoding typically imposes a higher CPU load compared to decoding. This decision balances performance and device resource constraints, especially on mobile devices.
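One way to express this send/receive asymmetry with standard WebRTC APIs is to order codec preferences differently on receiving and sending transceivers. The sketch below is illustrative, not the actual jitsi-meet code, and assumes `recvTransceiver` and `sendTransceiver` already exist:

```js
// Put a given codec first in a transceiver's negotiated codec list.
function preferCodec(transceiver, mimeType, capabilities) {
  const codecs = [...capabilities.codecs].sort(
    (a, b) => (b.mimeType === mimeType) - (a.mimeType === mimeType),
  );
  transceiver.setCodecPreferences(codecs);
}

// Receive-side: advertise AV1 first, so remote AV1 streams can be decoded.
preferCodec(recvTransceiver, 'video/AV1', RTCRtpReceiver.getCapabilities('video'));

// Send-side: prefer a cheaper encoder, such as VP9.
preferCodec(sendTransceiver, 'video/VP9', RTCRtpSender.getCapabilities('video'));
```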
We have great news! AV1 support was introduced to Jitsi in June 2024 and has been available in our stable packages ever since. Initially, AV1 had to be manually configured as the preferred codec through config.js settings, allowing users to opt in.
Building on this, AV1 was soon made the preferred codec on meet.jit.si, marking a significant step in leveraging its advanced compression capabilities. Starting with release stable-9909, AV1 became the default preferred codec in our Docker deployments, ensuring out-of-the-box support for users opting for containerized setups.
After thorough experimentation and analysis of real-world performance data, we’re excited to share that AV1 will very soon become the default preferred codec in all deployments, bringing its exceptional bandwidth efficiency and video quality to a broader audience. Stay tuned!
Your personal meetings team.
P.S. – With contributions from Jonathan Lennox (Jitsi Videobridge)
The post AV1 and more … how does Jitsi Meet pick video codecs? appeared first on Jitsi.
A portion of the telephone network that dates back several decades is often called Plain Old Telephone Service, or POTS for short.
The post How to Successfully Migrate from Traditional Telecom to Modern Packet-Based Phone Networks appeared first on Sangoma Technologies.
Sangoma TeamHub—a user-friendly app designed to simplify team collaboration and boost productivity—is now available to new users of Sangoma’s UCaaS platform CommUnity.
With features like voice calling, real-time chat, SMS, video meetings, and file storage, you get all your communication tools in one place without jumping between multiple apps. Stay connected wherever you are! Access Sangoma TeamHub easily through your web browser or use the desktop app on Mac or Windows. It’s also available on iOS and Android mobile devices.
Key Features:
Talk: Make clear calls using Sangoma TeamHub’s built-in softphone, which features call transfers and rings across all your devices. Enjoy convenient access to visual voicemail and the ability to seamlessly hand off calls, allowing you to continue conversations on another device. Fine-tune your audio settings to fully leverage CommUnity’s powerful voice features, including parking lots, advanced ‘Find Me/Follow Me’ routes, and your own conference bridge.
Sangoma TeamHub focuses on user satisfaction by providing robust and secure communications, as well as easy customization. Accessible from any internet-connected device, it offers timely notifications whether you’re in the office, working remotely, or on the go, keeping you connected and informed.
Sangoma TeamHub is the ultimate tool for team collaboration and communication, helping you work efficiently from anywhere.
Learn more about mastering teamwork with Sangoma TeamHub today!
Join us on December 12th for the exclusive TeamHub for CommUnity webinar: Register Now!
The post Introducing Sangoma TeamHub for CommUnity appeared first on Sangoma Technologies.
As businesses increasingly migrate to IP networks, a significant challenge remains: how do you transition from legacy POTS (Plain Old Telephone Service) while maintaining the investment in devices like phones, fax machines, and security equipment?
The post Seamlessly Transition from POTS to IP Networks with Sangoma Vega Gateways appeared first on Sangoma Technologies.
In a significant shift in the Unified Communications (UC) landscape, NEC announced its decision to exit the UCaaS market, in addition to its earlier announcement to leave the on-prem market. These moves have left many NEC partners and prospects wondering, “Where do we go from here?” NEC in the News Intermedia Cloud Communications and NEC […]
The post NEC Steps Out of the UCaaS Market, Now What? appeared first on Sangoma Technologies.
As part of our commitment to innovation and customer support, we are excited to announce the launch of @ASKSangoma, an AI-powered Knowledge Bot. The latest version 3.3 of the Sangoma TeamHub collaboration app introduces a new bot designed to assist our partners and customers with inquiries regarding Sangoma’s product portfolio. Trained on comprehensive product documentation, […]
The post Introducing ASK Sangoma: Your AI-powered Knowledge Bot for Quicker Support appeared first on Sangoma Technologies.
ClueCon has just finished. Here are some of my photos.
Seven Du from China won the MacBook. Congratulations.
In the evolving landscape of virtual meetings, seamless connectivity remains paramount. SIP integration enables participants to join meetings from various devices, including hardware phones or softphones such as Bria, video conferencing systems such as Zoom, and traditional telephony systems. This broadens the scope of participants who can connect to Jitsi conferences, making it more inclusive.
Note: We are using JaaS here for the purpose of simplicity, but all of this can be deployed using the Open Source components available on GitHub.
Note: Replace pinCode with the specific conference PIN provided for your Jitsi conference.
SIP audio-only connectivity provides a cost-effective and reliable way for participants to join Jitsi conferences. It reduces bandwidth consumption and costs, making it ideal in scenarios where video isn’t really needed, such as webinars. This option ensures users with limited internet access or slower connections can participate without interruptions.
Integrating Voximplant with Jitsi Meet involves several key steps.
Note: Voximplant can be replaced with a programmable SIP server such as Kamailio or OpenSIPS.
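To give a flavor of the routing logic, here is a hedged VoxEngine sketch. Event and function names follow Voximplant’s public VoxEngine API, but the target SIP URI and scenario are illustrative, not the actual integration code:

```js
// Hedged sketch: bridge an inbound SIP/PSTN call to a Jitsi meeting's SIP
// entry point. The URI is a placeholder; a real deployment would carry the
// conference name and PIN in the user part or via DTMF.
VoxEngine.addEventListener(AppEvents.CallAlerting, (e) => {
  const out = VoxEngine.callSIP('sip:myroom@sip.example.jitsi', {
    callerid: e.callerid,
  });
  // Connect the inbound leg with the outbound SIP leg and relay audio.
  VoxEngine.easyProcess(e.call, out);
});
```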
While SIP is often referred to as legacy these days, it remains the most widely used protocol for VoIP and acts as the common denominator across many vendors in the industry. That makes it a great candidate for connecting anything to everything.
Your personal meetings team.
The post Connecting anything to everything via SIP appeared first on Jitsi.
The Risks of Sticking with Unsupported On-Premises PBX Systems and Why Switching to Sangoma’s Switchvox is Essential Now You’ve found yourself in a tough spot – change is not just coming; it’s here. NEC has announced its departure from the on-premises PBX market, shifting its focus towards cloud-based solutions. So where does that leave you? […]
The post Delivering a One-two Punch to NEC’s Departure from the On-premises PBX Market appeared first on Sangoma Technologies.
As of 2023, a significant portion of businesses still rely on on-premises systems. While the specific percentage of businesses using on-premises UC solutions is not detailed in recent reports, it’s evident that both on-premises and cloud solutions continue to coexist, with each offering distinct advantages depending on organizational needs and priorities. The unified communications market […]
The post Why Choose On-Premises Solutions In a Cloud-Dominated Era? appeared first on Sangoma Technologies.
In the last stable release, Jitsi enabled a new feature called SSRC rewriting that improves the system performance for very large calls. This feature helps reduce the overall load on the system by reducing the number of signaling messages that get exchanged during a large call involving hundreds of endpoints. It also reduces the load on the local endpoint drastically by restricting the number of audio and video decoders created by the WebRTC engine thereby offering a better user experience for large calls.
When this feature is enabled, only a fixed set (say, up to 50) of SSRCs is signaled to the downstream endpoints in the call, irrespective of the call size. An SSRC is simply a unique ID used to identify a stream of RTP packets belonging to an audio or video source. When additional media sources are requested by the receiver, the Jitsi Videobridge (JVB) overwrites the SSRCs of the newly requested media streams with SSRCs that were previously signaled to the client and are no longer in use. Therefore, no more than 50 SSRCs need to be signaled to the endpoints, even if the total number of media sources routed far exceeds that limit.
Moving on to the why – what is the problem that we were trying to solve by implementing SSRC rewriting?
The challenges we faced when adding support for very large calls, given the existing approach of signaling every new source to every other participant in the call, were twofold.
At the client level, the number of m-lines in the SDP grew linearly with every remote source added to the call, irrespective of whether media for that particular source was ever routed to the endpoint. When a new m-line with an SSRC is added to the remote SDP, the libwebrtc engine creates a transceiver and does all the plumbing necessary to decode a media stream with the given SSRC if and when it starts receiving it from the JVB. This tied up resources on the local endpoint unnecessarily and introduced delays in renegotiation cycles, resulting in an unpleasant user experience. The client can also hit transceiver and SDP parser limits imposed by the browser, resulting in unexpected behavior. These performance issues are more pronounced on mobile endpoints, which have fewer resources to begin with compared to desktops.
On the backend, as the number of participants grew, so did the number of audio and video sources that needed to be signaled to every other participant in the call. This made the signaling traffic from Prosody (the XMPP communication server) to the endpoints grow quadratically. This was a problem because Prosody, which is single-threaded, was already the bottleneck when scaling calls. Previously we had to introduce artificial delays in signaling in order to reduce the load. This caused long delays in establishing media across participants when they unmuted their audio or video for the first time, and was very disruptive to large meetings.
The solution to both of these problems was to switch to a demand-based signaling mechanism, in which only a limited number of remote audio and video tracks are signaled to each endpoint, depending on what it needs or requests in real time, instead of signaling every known media source as soon as it is added to the call.
Jitsi Videobridge (JVB) uses a slightly different approach for audio and for video when determining what sources to forward. Forwarding decisions for audio are based simply on the “loudness” of the streams determined from the audio level RTP extension.
With SSRC rewriting, JVB uses a separate SSRC space for each receiver. It maintains a map from an SSRC number to the name of a source. Changes to the map are signaled to the receiver over the direct signaling channel (a WebRTC DataChannel over SCTP), using partial updates:
[modules/RTC/BridgeChannel.js] <e.onmessage>: Received AudioSourcesMap: [{"source":"fc63db0f-a0","owner":"fc63db0f","ssrc":2602882473},{"source":"449360a0-a0","owner":"449360a0","ssrc":1358697798}]
[modules/RTC/BridgeChannel.js] <e.onmessage>: Received VideoSourcesMap: [{"source":"e01f2103-v0","owner":"e01f2103","ssrc":3129389873,"rtx":3219602897,"videoType":"CAMERA"},{"source":"9ac8fef2-v0","owner":"9ac8fef2","ssrc":1542056973,"rtx":1571329554,"videoType":"CAMERA"},{"source":"ed6b60f5-v0","owner":"ed6b60f5","ssrc":550523896,"rtx":2808127984,"videoType":"CAMERA"}]
When a new stream needs to be forwarded, it is allocated an SSRC. Before the limit is reached, JVB simply generates a new SSRC number, and when the limit has been reached the oldest entry is reused. Let’s look at an example to make this more clear. Assume the limit is set to just 3, and the available sources are A, B, C, D, E. Initially the map is empty. When A starts sending packets, we allocate SSRC 101 to it and signal it to the receiver like this:
AudioSourcesMap: [{"source":"A","owner":"endpoint-A","ssrc":101}]
Similarly when B and C start to speak we allocate SSRCs 102 and 103 for them:
AudioSourcesMap: [{"source":"B","owner":"endpoint-B","ssrc":102}]
AudioSourcesMap: [{"source":"C","owner":"endpoint-C","ssrc":103}]
Now we have reached the limit of 3 SSRCs. When D starts to speak, we’ll find the source in the map that has been active least recently (let’s say that’s B) and re-use its SSRC for D. We’ll signal an update (“SSRC 102 now belongs to D”):
AudioSourcesMap: [{"source":"D","owner":"endpoint-D","ssrc":102}]
The scheme is identical for video, except that forwarding decisions are made differently. Receivers explicitly signal their preferences using video constraints. The source names and their mute status are published in presence when an endpoint signals its source information to the Jitsi Conference Focus (Jicofo), so this information is already available to all the other endpoints in the call. Based on the current layout in the UI and the user’s preferences, the client sends updated receiver video constraints over the bridge channel.
A bandwidth allocation algorithm in the JVB then decides which streams to forward to a particular receiver, based on its constraints and current network conditions:
[modules/RTC/BridgeChannel.js] <Fa.sendReceiverVideoConstraintsMessage>: Sending ReceiverVideoConstraints with {"constraints":{"ed6b60f5-v0":{"maxHeight":360},"e01f2103-v0":{"maxHeight":360},"9ac8fef2-v0":{"maxHeight":360}},"defaultConstraints":{"maxHeight":0},"lastN":-1,"onStageSources":[],"selectedSources":[]}
On receiving an update to one of the maps (audio or video), the client adds the signaled SSRCs to the remote description on the peer connection. The browser then fires a track event for each of the SSRCs, and the corresponding remote tracks are attached to the HTML elements associated with the remote user.
So when the audio packets with this SSRC arrive, the browser starts decoding the media and plays it through the selected audio output device. If the SSRC is already in use (i.e. the limit on the bridge has been reached) the client updates the owner of the associated track so that it gets attached to the corresponding HTMLAudioElement and the audio switches over to the new speaker seamlessly.
The video track creation process is the same as that of the audio tracks as described above. The client application needs to update the track’s owner whenever there is an updated source map involving the SSRC that is assigned to the track and re-attach it to the corresponding HTMLVideoElement so that the correct video stream is rendered in the remote participant’s viewport.
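In rough pseudo-JavaScript (the message shape follows the logs above; the helpers and the track map are hypothetical, not jitsi-meet’s actual implementation), the client-side handling looks something like this:

```js
// Hypothetical sketch of handling a VideoSourcesMap update from the bridge.
function onVideoSourcesMap(sources) {
  for (const { source, owner, ssrc } of sources) {
    const track = remoteTracksBySsrc.get(ssrc);
    if (track) {
      // The SSRC is being reused: hand the track over to its new owner and
      // re-attach it so the right participant's viewport shows the video.
      track.owner = owner;
      attachToVideoElement(track, owner); // hypothetical helper
    } else {
      // New SSRC: add it to the remote description; the browser then fires
      // a 'track' event and the new track is attached from that handler.
      addSsrcToRemoteDescription(ssrc, source); // hypothetical helper
    }
  }
}
```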
But wait: re-using an SSRC like this is okay for audio, because the streams simply get mixed before playback, but what about video? How do we avoid video content being rendered in the wrong viewport when signaling and media race? That’s the elegance of this approach: we simply use a large enough limit (larger than the maximum number of streams forwarded at any one time) and the occurrence becomes extremely unlikely. If the limit is larger by K, then K new forwarding decisions must be made before the signaling arrives at the receiver for the problem to happen.
So how do we choose the limits? We have the constraint just mentioned, but also an interesting trade-off. If the limit is too high we’re using unnecessary resources at the receivers. But if the limit is too low, we’ll be signaling updates more often. We have chosen to set the limits to 50 by default. That’s 50 for audio and 50 for video, which is well above the maximum of 25 tiles that we display at any time.
When SSRC rewriting is enabled, the number of source signaling messages can increase drastically, depending on the SSRC limits set for the conference and the total number of participants in the call. Imagine a call with 100 participants where everyone has their video on, the UI shows a grid of 25 participants, and the SSRC limit is set to 25. Whenever the user scrolls to the next grid of 25 participants, existing SSRCs get remapped. This happens every time the user scrolls back and forth, resulting in a lot of signaling messages over the bridge channel. What if the WebSocket connection for the bridge channel is down at this time? Videos would not be rendered, or audio from new dominant speakers would not be heard, which can be very disruptive to meetings. All the sources are signaled immediately after the WebSocket connection is re-established, but even minimal disruptions to audio can be very annoying.
To mitigate these issues, the Jitsi client switches to WebRTC’s SCTP data channel for the bridge channel instead of a WebSocket. This ensures that the bridge channel stays up as long as the media connection between the client and the JVB is up, resulting in minimal or no disruption to the signaling messages from the JVB to the downstream endpoints.
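In client terms, opening the channel looks roughly like this (a hedged sketch; the channel label and the dispatcher are illustrative, though the message shape matches the ReceiverVideoConstraints log shown earlier):

```js
// Carry the bridge channel over SCTP on the same RTCPeerConnection that
// carries media, instead of using a separate WebSocket.
const channel = peerConnection.createDataChannel('bridge-channel', { ordered: true });

channel.onopen = () => {
  channel.send(JSON.stringify({
    colibriClass: 'ReceiverVideoConstraints',
    lastN: -1,
    defaultConstraints: { maxHeight: 0 },
  }));
};

channel.onmessage = ({ data }) => {
  handleBridgeMessage(JSON.parse(data)); // hypothetical dispatcher
};
```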
This feature has been well tested and has been running on meet.jit.si for the past few months, with limits set to 50. We also enabled it by default in our last stable release of the Debian packages and Docker images. We will be rolling it out to all our production deployments in the next few releases, pending investigation into some SCTP crashes that we are seeing in the JVB.
Your personal meetings team.
The post Improving performance on very large calls: introducing SSRC rewriting appeared first on Jitsi.
In a digital age where the Unified Communications (UC) landscape is rapidly expanding, On-Premises UC solutions stand out as the bedrock for businesses aiming to secure, customize, and control their communication infrastructures. With the UC market projected to swell to USD 145.58 billion by 2024 and further grow to an impressive USD 496.30 billion by […]
The post Navigating the Future with Confidence: The Strategic Imperative of On-Premises UC Solutions appeared first on Sangoma Technologies.
The rumor mill started churning when cloud-based solutions began gaining prominence. These solutions offer scalable, flexible alternatives to traditional on-premises setups. As more companies migrated to the cloud, a narrative emerged suggesting that on-premises solutions like Switchvox were becoming obsolete. Though popular, this view only captures part of the picture. The Reality of Switchvox On-Premises […]
The post Switchvox On-Premises: Alive & Well appeared first on Sangoma Technologies.
So you want to have live meetings in Moodle courses. Well, as it turns out, this is quite an easy feat. Thanks to the wonderful work done by UDIMA (Universidad a Distancia de Madrid), you can use their Jitsi Moodle Plugin.
If your use-case doesn’t go beyond 25 monthly individual endpoints, you might want to opt for the JaaS Dev offering which is completely free. For users requiring more than 25 monthly endpoints or desiring premium features like transcriptions, dial-in, recordings or RTMP streaming, there are two options:
1. Add a credit card for overages, paying extra costs as needed, without any discounts.
2. Sign up for alternative plans offering an initial discount of 80% off for the first three months of JaaS usage.
To benefit from the 80% discount you need to use the MOODLE23 JaaS coupon. The coupon expires in September 2024.
Before starting the configuration process, you need to download and install the latest Jitsi Moodle Plugin in your Moodle instance and create a JaaS account.
Right off the bat, the plugin tries to use the freely accessible meet.jit.si instance as the backend. This is only going to work for the first five minutes due to the changes announced here. This should be more than enough if you only want to give it a try.
Once you have created your JaaS Account, here are the steps to configure the plugin:
Go to the JaaS API Keys section and create a new key pair. Name it something meaningful. Download the private key and store it somewhere safe.
Open the Moodle Jitsi plugin settings and change the values as follows:
– Domain: `8x8.vc`
– Server type: pick `8x8 Servers`
– App_ID: copy it from the JaaS Console API Keys page, e.g. `vpaas-magic-cookie-xxxxx`
– Api Key ID: copy it from the keys table on the same page; it should look like `vpaas-magic-cookie-xxxxx/somehex`
– Private key: the contents of the private key you just downloaded from JaaS Console
– Make sure to leave the ID User (jitsi_id) dropdown to Username, the default
With just a few steps, you’ll now have a complete communication solution right within Moodle!
Your personal meetings team.
The post Jitsi + Moodle, with a dash of JaaS appeared first on Jitsi.
There was a time when cash was king. Today, however, not so much.
Paying over the phone for everything from fast food to a holiday abroad has become the norm for millions of consumers who prioritise modern convenience over traditional transacting.
Protection from fraud is, of course, paramount, which is why strict rules exist. Phones used by businesses to take payments must be Payment Card Industry-compliant. They must feature ‘pause-and-resume’ functionality for use at the critical point that customers provide their payment card details. Also, all transactional calls and call recordings must be encrypted.
Breaching these rules can lead to seriously big fines capable of dealing a catastrophic blow to profit and reputation. Ultimately, in the most extreme cases, non-compliance can even lead to jail!
Next year, the rules change: an update to the Payment Card Industry Data Security Standard (PCI DSS) beefs up protection, but also places new obligations on in-scope organisations. In a world in which technology plays a vital part in the way over-the-phone transactions occur, it is the perfect time for businesses and their Managed Service Providers to also change things up, upgrading the desk and softphones that support that all-important compliance.
Choose Sangoma’s award-nominated P-Series Desk and Softphone, and that all-important compliance also comes with a feature set that ensures the wider desk calling experience is rich and rewarding for both callers and agents. High-definition audio, unprecedented plug-and-play deployment, and advanced IP applications as standard that include voicemail, call log, contacts, phone status, user presence, call parking, and more. Big and bold, yet at the same time stylish and sleek, its ergonomic design leverages the beauty of angles, curves and corners to provide users with brilliant visibility of and access to its bright touchscreen display and lightweight receiver, regardless of where they sit or stand. The audio is crisp and clear, the management and compliance of calls is fast and effective, and overall satisfaction levels are high. For callers, that overall high quality reflects favourably on the brands or organisations with which they interact.
Importantly, the P-Series’ versatile compatibility with both PBX and cloud-based voice systems also means it can be deployed on different platforms in different spaces whilst maintaining an organisation’s look-and-feel. In addition, the P-Series is the only Desk Phone designed to be fully-compatible with Sangoma’s wider communications portfolio – further enhancing its value to Sangoma customers and enabling its commercial reseller partners to benefit from the efficiency and simplicity of stocking and supporting just one Sangoma phone product line. The P-Series brings yet more added value too. The rarest of combinations: it appeals to all types of user groups, at a price point that suits all types of budgets.
For user organisations and their MSPs, we think that makes for a compelling offer all-round.
The post PCI Compliance: How the Sangoma P-Series Business Phone System Can Keep You Out of Jail appeared first on Sangoma Technologies.
We’re happy to announce that Jitsi will be participating in Google Summer of Code 2024!
We have some very cool project ideas in the list for this year, and we’re still open to discussing new ones.
You can also check out the official program website, the list of accepted organizations and the full program timeline.
The next important date is March 18 when the contributor application period opens. In the meantime, please join me in welcoming the new contributors to our community!
The post Google Summer of Code 2024 appeared first on Jitsi.
In today’s healthcare landscape, the patient experience hinges on seamless communication. This is where Sangoma Technologies makes an indelible mark. With its unified communications as a service (UCaaS) portfolio, Sangoma crafts end-to-end solutions that enhance patient care, streamline processes, and reduce costs.
Sangoma CX stands as a testament to this commitment, magnifying call-handling capabilities and more than doubling call volumes for healthcare providers, as evidenced by the transformation of the Healthcare Physicians Group. The robust reporting suite of Sangoma CX, coupled with its callback features, optimizes patient touchpoints, ensuring no call – and no patient – is left unanswered.
Furthermore, Sangoma’s integrated suite of value-based Communications as a Service solutions allows care organizations of all sizes to maximize productivity. This means healthcare institutions can focus on what they do best – providing top-notch patient care – while Sangoma takes care of the rest.
With the help of Sangoma Apps, outreach initiatives are taken to new heights. From online scheduling to automated reminders, every interaction is personalized, enhancing the patient experience at every turn. Video conferencing is a must for telemedicine, and Sangoma delivers a consistent and feature-rich experience for virtual consultations, becoming a partner to deliver exceptional patient care.
The results speak for themselves. Hospitals, pharmacies, and labs report improved patient care and significant time savings in emergency handling – up to 50% in some cases. The future of healthcare communication is here, and it’s powered by Sangoma.
Learn more about our solutions in the Sangoma Healthcare eBook.
The post Powering Patient Touchpoints: Sangoma’s Cure for Communication Roadblocks appeared first on Sangoma Technologies.
Building on the new, powerful AI features introduced to Sangoma CX already in 2024, our cloud contact center solution now expands its automation capabilities and adds other significant enhancements with our 7.5 release. The new version is even more capable, empowering agents, supervisors, and administrators in their day-to-day operations.
Release 7.5 also brings several other minor improvements, like the capability to export the Queue Annual report to a spreadsheet format, and a new simplified implementation for logging agent call flow for phone-only Agents, in simple call center setups. These improvements enhance the overall functionality and user experience for contact center administrators.
If you have any questions, need a quote, or require assistance with these new features, please reach out to us.
Stay tuned for updates on upcoming enhancements and exciting new features in Sangoma CX!
Sincerely,
The Sangoma Product Team
The post More AI Features for Your Contact Center With Sangoma CX 7.5 appeared first on Sangoma Technologies.
It’s hard to believe that Asterisk has been around for 25 years. It started in humble beginnings as a phone system for a fledgling company, only to grow and expand to become the purpose of that company itself eventually. While I was not there from the beginning, I was there from the fairly early days.
Back then, it was the wild west. VoIP was still new. SIP was still new, and its problems had yet to be discovered. STIR/SHAKEN was in the minds of no one. The cost of calling people was still high (in comparison to what we pay today). Asterisk swiftly emerged onto the scene and became THE open source phone system. It disrupted the traditional phone system market and gave the power back to the users and deployers to do things the way they wanted at a price they could handle.
As the industry evolved and expanded, Asterisk continued to do so as well. Thanks to the numerous contributions and the community around the project, it expanded in functionality and scope beyond what anyone could have envisioned. Its usage grew and grew in more clever and interesting ways. Every week, by helping individuals and talking to them, I find new ways that Asterisk is used. The flexibility of Asterisk means you are always surprised by what people are doing.
This evolution and expansion continue to happen to this day. While Asterisk started as a phone system, it has also become a telephony toolkit. This has given users even more power and control, especially developers, allowing them to manifest ideas more rapidly in the communications space by providing easy-to-use and understandable interfaces. This is where a lot of the power in Asterisk is these days. This aspect is filled with endless possibilities and untapped potential.
I want to thank everyone who has helped Asterisk get to this point. Those who have contributed code or ideas, filed issues, helped others on forums and other places, and spread the word of Asterisk itself. It’s been a wild 25 years, and we still have more to go. It’s not too late to join me and others on the journey through participating in the Asterisk project and community. You can find us on GitHub or the Community Forums. All help is welcome!
As always, I enjoy sharing extra resources. If you’re intrigued by the history of Asterisk and other related topics, I invite you to check out the webinar I gave on the fascinating “Evolution of Asterisk”.
The post 25 Years of Asterisk appeared first on Sangoma Technologies.
The healthcare industry is characterized by a relentless pursuit of excellence. The quality of care directly translates into a substantial impact on people’s lives. In the quest to deliver superior patient experiences, Sangoma solutions are at the forefront.
Connectivity, the lifeblood of seamless operations, is the top priority for the IT team. No more dropped calls or network reach loss. Instead, budget-friendly 5G broadband and satellite options can handle your internet connections, no matter when or where. Sangoma offers SD-WAN too, which boosts your network’s performance by tying together all your connections, making sure they’re always up and running, and offering top-notch security against threats. Make your backbone embody reliability and efficiency thanks to a fully-monitored network infrastructure that meets PCI compliance standards.
Sangoma’s UCaaS features elevate this connectivity to embrace diversity and dynamism. Video conferencing, instant messaging, mobile applications, and team collaboration tools ensure that communication is not just continuous, but also multifaceted and hybrid-work friendly.
The integrated contact center features of Sangoma serve as a catalyst for change. With true omnichannel features, you can deliver an exceptional customer experience and hear from your patients through their preferred channel. Every interaction will help your patients feel valued and cared for.
HIPAA compliance is vital to preserve the sanctity of patient data. Trust in the safety of information is no longer a hope, but a guarantee.
Sangoma’s Managed Internet Services and Mobility Solutions offer superior communications at lower costs while expanding the talent pool. Secure internal collaboration, compliant documentation, cost-effective operations, and flexible solutions are no longer exceptions, but the norm.
Keep the pulse of your healthcare institution steady and healthy with Sangoma, your one-stop-shop for modern technology solutions. Learn more in our Healthcare eBook.
The post The Pulse Behind Modern Healthcare Communication appeared first on Sangoma Technologies.
We are thrilled to share the latest improvements to our Sangoma CX platform in its newly released version 7.4. Focusing on customer service and reporting capabilities enhancements, these updates will make your contact center experience more efficient, intelligent, and data-driven. Let’s dive in!
Omnichannel
Telegram interaction – Agent side
Telegram interaction – End-user side
AI
Reports
Automated Data Export
The Automated Data Export now includes detailed information on Agent Hold times, Blind Transfer times and destinations, and an extended Queue Details container with a “Max Wait Time” setting for each queue.
Agent
We are committed to continually improving your experience with Sangoma CX, and we know you will find these latest enhancements invaluable to your business operations.
For quotes, questions, or assistance with these new features, please reach out to us.
Thank you again for choosing Sangoma CX for your Contact Center needs!
Sincerely,
The Sangoma Product Team
Sangoma CX and Google Dialogflow Configuration Guide
Sangoma CX and WhatsApp Channel Configuration Guide
Sangoma CX and Telegram Channel Configuration Guide
Partners Only: CX 7.4 Release Kit
The post Exciting Updates for Your Contact Center with Sangoma CX 7.4 appeared first on Sangoma Technologies.
With the holidays just around the corner, we thought it would be cool to show a perhaps unconventional use of the Elgato Stream Deck, a gadget I recently acquired that would make a great gift!
The Elgato Stream Deck is a programmable hardware device that allows users to automate virtually any task with the press of a button. It has been around for a while but not too long ago I was at the RTC.ON conference chatting with my buddy Dan Jenkins when he told me there was a library to control these devices using WebHID. I instantly bought one (no, this is not a sponsored post).
The idea here is to use the Jitsi iframe API (you can start using it right away with a free JaaS account!) to map custom meeting controls onto your own Stream Deck. Our iframe API provides a bunch of events and commands to interact with the meeting, and the WebHID library allows us to program each key individually, including the icon on each of the buttons, which is actually a tiny display!
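Here is a hedged sketch of what the glue code can look like. It assumes the `@elgato-stream-deck/webhid` package and the `JitsiMeetExternalAPI` global from Jitsi’s external_api.js script; the room name is a placeholder and the key-to-command mapping is illustrative, not the demo’s actual source:

```js
import { requestStreamDecks } from '@elgato-stream-deck/webhid';

const api = new JitsiMeetExternalAPI('8x8.vc', {
  roomName: 'vpaas-magic-cookie-xxxxx/MyRoom', // JaaS-style room; placeholder
  parentNode: document.querySelector('#meet'),
});

// One iframe-API command per key, in key order (six keys on a Stream Deck Mini).
const commands = [
  'toggleAudio', 'toggleVideo', 'toggleShareScreen',
  'toggleTileView', 'toggleRaiseHand', 'hangup',
];

const [deck] = await requestStreamDecks(); // shows the WebHID permission prompt
await deck.clearPanel();

// Note: older library versions pass the key index to 'down'; newer versions
// emit a control object instead.
deck.on('down', (keyIndex) => {
  const command = commands[keyIndex];
  if (command) {
    api.executeCommand(command); // drive the meeting from the hardware keys
  }
});
```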
Here is a video demonstrating the integration:
Cool, right? The potential for contextual controls for specific applications right from your browser is virtually boundless. What a time to be working on the Web Platform!
The source code can be found here. It works with 6 buttons by default (the Stream Deck Mini) but it’s easy to adapt to other models.
Have fun and happy holidays!
Your personal meetings team.
The post Custom meeting controls with Elgato Stream Deck and WebHID appeared first on Jitsi.
Starting on August 24th, we will no longer support the anonymous creation of rooms on meet.jit.si, and will require the use of an account (we will be supporting Google, GitHub and Facebook for starters but may modify the list later on). This is a first for us, so users may encounter a few bumps here and there as we are tweaking the experience to make sure there is as little friction as possible on the way into a meeting.
When we started the service back in 2013, our goal was to offer a meeting experience with as little friction and as much privacy as possible. We felt and still feel that both of these goals are very important and one of the main reasons that justified the existence of “yet another meeting service.” We wanted people to be able to converse easily and freely, without fear of expressing their views and opinions.
Our “one tap and you’re in” experience was a big part of our strategy to eliminate friction. We didn’t want people to have to worry about “creating” meetings in advance, remembering passwords, codes or long complicated sequences of numbers for a meeting ID. We wanted users to be able to think of a name and just go there. Through the years we’ve had to compromise on this a little bit. We ended up introducing a pre-meeting device check screen. We felt that checking your camera and microphone before you entered a room could save everyone some hassle so it was worth the pause.
As for privacy, we previously made sure all communication was always encrypted and we retained no data beyond what is necessary to actually provide a decent meeting service.
Offering the possibility to use the service anonymously felt like a good way to help with both privacy and usability.
Our commitment to both goals remains as strong as ever but anonymity is no longer going to be one of the tools we use to achieve them.
Earlier this year we saw an increase in the number of reports we received about some people using our service in ways that we cannot tolerate. To be clear, this was not about some people merely saying things that others disliked.
Over the past several months we tried multiple strategies in order to end the violations of our terms of service. However in the end, we determined that requiring authentication was a necessary step to continue operating meet.jit.si.
It is a good time to have a look at our privacy terms. 8×8 will now store the account responsible for creating rooms. Aside from the changes to our privacy terms referenced above, there is no other change to our meetings. We are still very much committed to holding user privacy in the highest regard and we still have no tools that would allow us to compromise the privacy of the actual audio or video content of a meeting, nor do we intend to create any.
That said, it is completely understandable that some users may feel uncomfortable using an account to access the service. For such cases we strongly recommend hosting your own deployment of Jitsi Meet. We spend a lot of effort to keep that a very simple process and this has always been the mode of use that gives people the highest degree of privacy.
If you see content that violates the jit.si terms of service you can always report it.
That’s all we’ve got for now!
The Jitsi Team
The post Authentication on meet.jit.si appeared first on Jitsi.
Flutter‘s initial release occurred in 2017, the same year as the introduction of our mobile apps and mobile SDKs. For those who are unfamiliar with it, Flutter is one of the most popular frameworks for developing cross-platform applications.
Now, a few years after their first release, we are thrilled to announce that our mobile SDKs and Flutter cross paths as the Jitsi Meet Flutter SDK. Yes, that's right: after multiple requests, an official Jitsi Meet plugin for Flutter is now available.
As of now, our family of mobile SDKs is more complete than ever.
Android and iOS are supported, of course. The plugin serves as a wrapper for the iOS and Android SDKs, on top of which a Flutter API was created with functionality similar to that found in the native APIs.
The plugin is available on pub.dev under the jitsi_meet_flutter_sdk name. Discover it there, follow the instructions, and you’ll be able to utilize the API to the fullest extent.
Here is a sneak peek of how simple it is to add a meeting to a fresh page.
Here is how that looks:
In your own Flutter app, you'll have the same view as the one from the Jitsi Meet mobile apps, with just a few additional lines of code. Amazing, right?
We developed two apps using the Jitsi Meet Flutter SDK: one is the example app in the plugin repository, which primarily acts as a tester app by exposing the majority of the plugin's features in the user interface; the other is an official sample app, in the repository that contains all of our samples for all mobile SDKs, which is a straightforward example of integrating Jitsi Meet.
Flutter is new to us, and we hope this new SDK will make it easier for our users and JaaS customers to embed video meetings into their existing Flutter apps. We eagerly await your feedback!
Your personal meetings team.
Author: Gabriel Borlea
The post Introducing the Jitsi Meet Flutter SDK appeared first on Jitsi.
Ever since we introduced our mobile apps to the world back in 2017 they have been backed by React Native.
Using React Native allowed us to reach feature parity quickly, since all logic is shared between our web and mobile codebases: they are not two different things, it's a single codebase.
Later that year, we released our native mobile SDKs to the world. These SDKs were a thin wrapper over our React Native application, so our users could embed the entire meeting experience into their own mobile apps with little effort.
This has been our guiding principle since the inception of the iframe API: to provide a high-level and fully-featured component that can be integrated into other apps.
Today we are taking another step in our mobile journey by releasing a React Native SDK.
What does this mean? Before today, if you had a React Native application, we provided no way to embed Jitsi Meet. Now we do!
As mentioned above, our mobile apps are built using React Native, and over time we received a number of requests from our community and customers for an actual React Native SDK. We finally managed to expose it as a React Native library. It's not that we didn't have it in the back of our minds, but we focused on native first to cater to the needs of our internal consumers.
Exposing a React Native app as a component seems easy on the surface, but being so complex and having so many dependencies made it a lot harder than we had thought. Fortunately, this all changed thanks to Google Summer of Code. We were fortunate to have Filip Rejmus take on the project and kickstart it. After his amazing work, we took over and added the final touches, and now it's available on npm.
First go and grab our package from npm and follow the setup instructions.
Below you can see how easy it is to integrate and enable different meeting options in your app, by simply importing the JitsiMeeting component and adding it to your code:
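As a rough sketch (the package name and the JitsiMeeting props shown here are assumptions based on the SDK's README, so check the npm page for the current interface):

// A minimal screen that embeds a Jitsi meeting.
import React from 'react';
import { JitsiMeeting } from '@jitsi/react-native-sdk';

const MeetingScreen = () => (
    <JitsiMeeting
        room={'MyMeetingRoom'}
        serverURL={'https://meet.jit.si/'}
        style={{ flex: 1 }}
        // Lifecycle callbacks are passed through the eventListeners prop.
        eventListeners={{
            onReadyToClose: () => console.log('meeting closed')
        }}
    />
);

export default MeetingScreen;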
You will have access to the same features as the Jitsi Meet app.
We created a sample app which integrates our brand new SDK together with react-native-navigation, check it out!
This new SDK will make it easier for our users and JaaS customers to embed video meetings into their existing React Native apps. We eagerly await your feedback!
Your personal meetings team.
Author: Calin Chitu
The post Introducing the Jitsi Meet React Native SDK appeared first on Jitsi.
In 2017, Jason A. Donenfeld (known for WireGuard®) reported an issue in Tox's handshake [1]. This issue is called "Key Compromise Impersonation" (KCI). I will try to explain the issue as simply as possible:
In Tox you don't register an account (e.g. with username and password); instead your identity is based solely on (asymmetric) cryptographic information, a so-called asymmetric key pair. Such a key pair consists of a public part (public key) and a private part (private key). The public part, as the naming suggests, is public and contained in your ToxID, which you share with your contacts to be able to communicate with them via Tox.
The private part, again as the name suggests, needs to stay private! If someone gets in possession of your private key, they have stolen your Tox identity. This could, for example, happen if someone got physical access to your computer or successfully installed malware on your system, e.g. a so-called trojan horse, to be able to extract data from it. If this happens, you will most likely have multiple problems and your Tox identity may be just one of them.
The password you enter when you create your Tox profile, e.g. when you first start the qTox client, is used to encrypt your profile and also your private key on your disk. When you start qTox, you need to enter your password to decrypt your private key, to be able to communicate via Tox. Your private key is then stored unencrypted in memory (i.e. RAM) while qTox is running. This means an attacker either needs to get access to your password (steal or crack it) or to read your Tox private key from memory while your Tox chat client is running.
If someone successfully stole your Tox identity (i.e. this private key), they are you – at least in the context of Tox. So they can successfully impersonate you in Tox. Now in this case the KCI vulnerability leads to “interesting” behavior. It is clear that someone who stole your identity is able to impersonate you. But because of the KCI vulnerability, they may also be able to impersonate others to you. This means, to exploit this vulnerability in practice, someone not only needs to successfully steal your private key, but additionally:
In summary, KCI is exploitable, but with a huge effort.
Anyway, this is a real vulnerability and it should be fixed. The current Tox handshake implementation is not state-of-the-art in cryptography and it also breaks the "do not roll your own crypto" principle. As a solution, there is a framework called the Noise Protocol Framework (Noise, [2]) which can be used to create a new handshake for Tox. More precisely, the application of Noise will only change a part of the Tox handshake – the so-called Authenticated Key Exchange (AKE). Noise-based protocols are already in use in e.g. WhatsApp, which uses one for encrypted client-to-server communication, and WireGuard®, which uses one for establishing Virtual Private Network (VPN) connections. Noise protocols can be used to implement End-to-End Encryption (E2EE) with (perfect) forward secrecy (which the current Tox implementation already provides), while additionally adding KCI-resilience to Tox.
Tobi (goldroom on GitHub) wrote his master's thesis ("Adopting the Noise Key Exchange in Tox") on the KCI issue in Tox, designed a new handshake for Tox based on NoiseIK, and implemented a proof-of-concept (PoC) for this new NoiseIK-based handshake using Noise-C [3]. This PoC has a few drawbacks, which is why it should not be used in practice (see Appendix). If you want to know more about his master's thesis, see the update in the initial KCI GitHub issue [4].
He applied for funding at NLnet foundation and their NGI Assure fund to continue his work on Tox and to be able to implement a production-ready Noise-based handshake for toxcore. Fortunately, this application was successful [5]. NGI Assure is made possible with financial support from the European Commission’s Next Generation Internet programme (https://ngi.eu/).
The objective of this project is to implement a new KCI-resistant handshake based on NoiseIK in c-toxcore, which is backwards compatible with the current KCI-vulnerable handshake to enable interoperability and a smooth transition. The main part of this project is to implement NoiseIK directly in c-toxcore, removing Noise-C (which was used in the PoC) as a dependency, since the only other dependency for c-toxcore is NaCl/libsodium, and therefore improving the maintainability of c-toxcore (see Appendix).
The tasks in this project are:
Noise_IK_25519_ChaChaPoly_SHA512 (this is the currently chosen protocol, but it may change due to new insights in c-toxcore).
The plan is to implement this new handshake by July 2023. Since it's not a trivial task, there are still some obstacles:
“Note that lossy and out-of-order message delivery introduces many other concerns (including out-of-order handshake messages and denial of service risks) which are outside the scope of this document.” (cf. [6])
Both points are not ideal for a handshake based on NoiseIK (i.e. it would be way easier to implement it in a client-server model using TCP), but it should be possible to work this out.
Tobi is available in #toktok (libera.chat) as tobi/@tobi_fh:matrix.org and ready for any input, questions, remarks, discussions or complaints.
The PoC shouldn’t be used in practice/in production because it should be improved in the following aspects (for details see chapter five of Tobi’s thesis [4]):
The Noise_IK_25519_ChaChaPoly_SHA512 protocol will be implemented directly in c-toxcore. This will remove Noise-C as a dependency for toxcore (i.e. the only other dependency is NaCl/libsodium) and therefore improve maintainability. Additionally, this will reduce the number of possibly vulnerable source lines of code.
“WireGuard” is a registered trademark of Jason A. Donenfeld.
Hey there Fellow Jitsters!
Have you ever considered adding telephony to your Jitsi Meet self-hosted instance?
Up until now you only had the option to run Jigasi and deal with telephony yourself. Many of our users do this every day, but when we asked we learned that there was interest in offloading that part. Could someone else host it?
Today we’re launching a new way to quickly connect to the public telephone network and offer dial-in capabilities to your users without the need for hosting and managing the entire telephony infrastructure: JaaS components. You can give it a try today!
Are you running Jitsi Meet on a Debian instance or are you using Docker? Either way, you can opt-in for this feature and it will be automatically set up. A new JaaS account will be created for you and you’re good to… call.
If you’re running Jitsi Meet on Debian all you need to do is to answer ‘Yes’ to this question and you will have dial-in capability on your Jitsi instance.
Note: A Let's Encrypt certificate is required, and the email address used to generate the certificate will also be used for creating your new JaaS account.
If you’re running Jitsi Meet on Docker you’ll need to set the following variables on your .env file:
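As a rough illustration (the flag name below is an assumption based on docker-jitsi-meet's example env file, so verify it against your .env.example):

# Assumption: opt-in flag for JaaS components in docker-jitsi-meet.
ENABLE_JAAS_COMPONENTS=1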
Now you can restart your setup with `docker-compose up --force-recreate`
An email will be sent to you, asking you to set up a password for the JaaS admin account:
From the JaaS admin console you can manage your account, see the overall activity and upgrade to another plan if needed.
You're all set up now! Let's make a phone call! Join a call on your Jitsi Meet instance and notice how the dial-in option becomes available when trying to invite participants. You can now dial in using one of the phone numbers provided in the list and you'll be connected to the meeting.
Get started today, a free trial is available! Please check the JaaS components website for details on pricing.
Jigasi is the first Jitsi component offered as a service, with more to come. Stay tuned!
Your personal meetings team.
Author: Oana Emilia Ianc
The post Self-hosting a fully-featured Jitsi Meet instance just got as easy as pie appeared first on Jitsi.
It's been a while since we introduced End-to-End Encryption (E2EE) over two years ago. Back then we started with a simple model consisting of a passphrase everyone needed to type, and later migrated to a model with randomly generated keys per participant. Each has different characteristics, and we ultimately chose to stick with the latter. Today we are introducing a missing piece in the E2EE puzzle: user verification.
User verification was not previously possible in Jitsi Meet. Just like our core E2EE we are basing our implementation on the Matrix protocol. Matrix’s libolm / vodozemac provide a Short Authentication String (SAS) mechanism implementation which developers can use. They even have great documentation on how it works, thanks Matrix!
First, you’d gather in a meeting and turn E2EE on.
Now you’ll see a new option for each participant in their tile menu that allows you to verify them:
After choosing to verify a user a dialog will open with a list of emojis:
Wait, what? Emoji? These emojis form the SAS. They have been carefully chosen to avoid ambiguity and make the process more user friendly than comparing random numbers. You can find more information in the Matrix spec. You must verbally compare them with the other participant and, if they match, mark it as verified.
Once a user is verified this will be reflected in the user information tooltip:
At this point you can be sure not only that your data is encrypted end-to-end, but also that there is no man-in-the-middle (MITM) attack happening.
User verification is currently available in Jitsi Meet master and deployed in beta. It will be part of the next stable release, but expect more improvements, especially on the UX front.
We’d like to thank Robertas Maleckas (ETH Zurich), Prof. Kenny Paterson (ETH Zurich) and Prof. Martin Albrecht (Royal Holloway, University of London) for their work researching Jitsi Meet’s E2EE and encouragement, and Matrix for their tools, which make implementing E2EE a much better experience.
Please note that we still consider our E2EE experimental and are still working on improvements. Please make sure you check out our post on how end-to-end encryption in general does NOT offer a meaningful level of trust and protection when it comes to modern meetings services.
Your personal meetings team.
The post Trust, but verify: introducing user verification appeared first on Jitsi.
Trying to explain something to someone and they just don't get it? If an image is worth a thousand words, how about a diagram? Today we're excited to announce the availability of whiteboards in Jitsi Meet – the missing piece for all those seeking an educational meeting solution, and not only them!
We decided to stand on the shoulders of giants on this one. The core implementation comes from Excalidraw, an excellent piece of whiteboarding software, which is Open Source, of course. We made some tweaks and adjustments to have it fit in with our vision. We seek to provide an easy-to-use feature that enables participants to share ideas and brainstorm without having to seek a third party solution. From now on, meeting moderators can open a whiteboard and have everyone in the call sketch away.
The interface supports a number of tools and settings that keep the collaboration interesting and effective. During a meeting, changes that a participant makes locally on the whiteboard are sent to a server, which then distributes those updates only to the devices of the other participants in the meeting. The whiteboard content can be exported as a PNG or SVG at any time during the meeting, so all that hard work doesn't go to waste.
If you’re using meet.jit.si, you can go ahead and play with the whiteboard in your meetings right away! For those self-hosting, it can be enabled from the config file, and you’ll need to deploy this simple backend.
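For self-hosters, the result might look something like this excerpt of config.js (a minimal sketch: the option names follow jitsi-meet's sample config, so treat them as assumptions and check your own config.js; the URL is a placeholder for your deployed backend):

// Excerpt from config.js enabling the whiteboard feature.
whiteboard: {
    enabled: true,
    // Points at the deployed Excalidraw collaboration backend.
    collabServerBaseUrl: 'https://whiteboard.example.com'
},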
As you might already know, we’re firm believers in the power of Open Source, we seek to collaborate with other communities to build solutions everyone can use and we’re excited to bring more to this feature in the future!
Your personal meetings team.
Author: Mihaela Dumitru
The post Introducing whiteboards in Jitsi Meet appeared first on Jitsi.
Jitsi today supports live-streaming conferences to large audiences through our Jibri tool – this tool renders all the media from the conference and forwards it to a streaming service such as YouTube.
This approach works, but it has limitations. In addition to being computationally expensive, it also introduces substantial latency to the media. This can be a problem when interaction is needed between the participants in the conference and the audience, for example for a text-based question-and-answer session.
This article will describe a new approach to live-streaming media, which uses Jitsi’s builtin functionality, without transcoding, to reach potentially very large audiences with latency comparable to that of a live conference.
The basic approach to media distribution for this solution is straightforward – simply forward media to all the audience members in the same way that they are forwarded today to conference participants – i.e. as individual RTP streams over WebRTC. This can re-use Jitsi’s existing well-tested technology to distribute the media and have it arrive at receivers and be played out to viewers.
The challenge, of course, is to scale Jitsi’s back-end services so they can support sending media to very large numbers of viewers, potentially in the hundreds of thousands or more. The rest of this article will discuss some of the architectural enhancements we need to make to Jitsi to support this.
The first insight that will make this possible is to realize that in a streaming scenario, while the conference’s active participants need to know that they are being watched by an audience, they don’t need to know all the audience members’ identities or presence in real-time; nor do the audience members need to know about each other. Thus, the system can be modified such that presence information about individual audience members is not sent to other conference participants, or to unnecessary parts of the backend; this reduces the amount of signaling traffic substantially.
The second substantial change that we are making to the backend is to be able to have more sophisticated topologies for the Jitsi Videobridges to relay media among them. Currently, when more than one Jitsi Videobridge is used in a conference (in Jitsi’s Octo/Relay technology), the bridges are connected to each other in a full mesh. This topology minimizes the latency for media, but would not scale to very large conferences, where e.g. hundreds of thousands of participants might need several hundred bridges. If every bridge in such a conference were connected to every other one, the bridges could be overloaded just sending media out.
Instead, we are developing technology that can arrange bridges into more elaborate topologies. In particular, our plan for very large conferences is to still have the conference’s active participants be connected to bridges which are arranged in a mesh; but the audience members would then be connected to bridges whose interconnection forms a tree extending from various nodes of the core mesh, so that the core media servers would only need to send media out to a limited number of connections to the audience’s bridges, which would then be forwarded out to the audience, possibly relaying through multiple bridges on the way.
Finally, changes need to be made for the signaling servers used by the Jitsi back-end. While information about audience members only needs to be propagated to selected back-end infrastructure servers, information about a conference’s active participants needs to be forwarded to the entire audience. The existing XMPP servers that the Jitsi back-end uses aren’t designed for this level of load. Thus, we are developing solutions such that this participant information can be mirrored from one XMPP server to another, allowing each server to handle only a manageable number of client connections while still getting the information to the entire audience quickly.
Stay tuned!
Your personal meetings team.
Author: Jonathan Lennox
The post Low-latency conference streaming to very large audiences appeared first on Jitsi.
For a while now Jitsi Meet has been using the RNNoise library to calculate voice audio detection scores for audio input tracks, leveraging those to implement functionality such as "talk while muted" and "noisy mic detection". However, RNNoise also has the capability to denoise audio.
In this article we'll briefly go through the steps taken to implement noise suppression using RNNoise in Jitsi Meet.
What’s RNNoise anyway?
RNNoise, as the authors describe it, "combines classic signal processing with deep learning, but it's small and fast". This makes it perfect for real-time audio, and it does a good job at denoising.
It's written in C, which allows us to (relatively) easily use it on the Web by compiling it as a WASM module; that, combined with a couple of optimizations, gets us noise suppression functionality with very little added latency.
Working with Audio Worklets
Previously Jitsi Meet processed audio using ScriptProcessorNode which handles audio samples on the main UI thread. Because the audio track wasn’t altered and we simply extracted some information from a copy of the track, performance issues weren’t apparent. With noise suppression the track gets modified, so latency is noticeable, not to mention that any interference on the main UI thread will impact the audio quality, so we switched to audio worklets.
Audio worklets run in a separate thread from the main UI thread, so samples can be processed without interference. We won’t go into the specifics of implementing one as there are plenty of awesome resources on the web such as: this and this. Our worklet implementation can be found here.
Webpack integration
Even though using an audio worklet looks fairly straightforward, there were a couple of bumps along the road.
First off, and probably the most frustrating part, was making them work with webpack's dev server.
Long story short, the dev server has some neat features such as hot module replacement and live reloading, which rely on some bootstrap code added to the output JavaScript bundle. The issue here is that audio worklet code runs under the AudioWorkletGlobalScope's context, which doesn't know anything about constructs like window, this or self; however, the aforementioned boilerplate code makes ample use of them, and there doesn't seem to be a way to tell it that the context in which it's running is a worklet.
We tried several approaches but the solution that worked for us was to ignore the dev server bootstrap code altogether for the worklet’s entry point, which can be configured in webpack config as follows:
module: {
    rules: [
        ...config.module.rules,
        {
            test: resolve(__dirname, 'node_modules/webpack-dev-server/client'),
            loader: 'null-loader'
        }
    ]
}
That took care of the dev server; however, production webpack bundling also introduced boilerplate which made use of the "forbidden" worklet objects. In this case it's easily configurable by specifying the following output options:
output: {
    ...config.output,
    globalObject: 'AudioWorkletGlobalScope'
}
At this point we had a working worklet (pun intended) that didn’t break our development environment.
WASM in audio worklets.
Next came adding in the RNNoise WASM module. Jitsi uses RNNoise compiled with emscripten (more details in the project: https://github.com/jitsi/rnnoise-wasm). With the default settings the WASM module will load and compile asynchronously; however, because the worklet loads without waiting for the resolution of promises, we need to make everything synchronous. So we inline the WASM file by passing -s SINGLE_FILE=1 to emscripten, and we also tell it to compile synchronously with -s WASM_ASYNC_COMPILATION=0. With that in place everything will be loaded and ready to go when audio samples start coming in.
Efficient audio processing.
Audio processing in worklets happens in the process() callback method of the AudioWorkletProcessor implementation, at a fixed rate of 128 samples (unlike with ScriptProcessorNode, this can't be configured). However, RNNoise expects 480 samples for each call to its denoise method, rnnoise_process_frame.
To make this work we implemented a circular buffer that minimizes copy operations for optimal performance. It works by having both the buffered samples and the ones that have already been denoised on the same Float32Array with a roll over policy. The full implementation can be found here.
To summarize, we keep track of how many audio samples we have buffered; once we have enough of them (480 to be precise) we send a view of that data to RNNoise, where it gets denoised in place (i.e. no additional copies are required). At this point the circular buffer has a denoised part and possibly some residue samples that didn't fit in the initial 480, which will get processed in the next iteration. The process repeats until we reach the end of the circular buffer, at which point we simply start from the beginning and overwrite "stale" samples; we consider them stale because at this point they have already been denoised and sent.
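As a simplified sketch of that scheme (not the actual Jitsi code; denoiseFrame stands in for the call into the RNNoise WASM module):

const BLOCK_SIZE = 128;    // fixed render quantum delivered to process()
const FRAME_SIZE = 480;    // samples RNNoise wants per call
const BUFFER_SIZE = 1920;  // LCM of 128 and 480, so frames align at rollover

// Stub standing in for the emscripten-compiled denoiser (denoises in place).
function denoiseFrame(frame) { /* WASM call goes here */ }

class DenoiseProcessor extends AudioWorkletProcessor {
    constructor() {
        super();
        this._buffer = new Float32Array(BUFFER_SIZE);
        this._writeIdx = 0; // where the next input block lands
        this._sendIdx = 0;  // start of the next 480-sample frame
    }

    process(inputs, outputs) {
        const input = inputs[0][0];
        const output = outputs[0][0];
        if (!input) return true;

        // Roll over and overwrite "stale" (already denoised) samples.
        if (this._writeIdx === BUFFER_SIZE) {
            this._writeIdx = 0;
            this._sendIdx = 0;
        }
        this._buffer.set(input, this._writeIdx);
        this._writeIdx += BLOCK_SIZE;

        // Denoise in place once a full frame has accumulated; leftover
        // residue is picked up on a later iteration.
        if (this._writeIdx - this._sendIdx >= FRAME_SIZE) {
            denoiseFrame(this._buffer.subarray(this._sendIdx, this._sendIdx + FRAME_SIZE));
            this._sendIdx += FRAME_SIZE;
        }

        // The real worklet emits the denoised samples (with a little added
        // latency); echoing the input keeps this sketch self-contained.
        output.set(input);
        return true;
    }
}

registerProcessor('denoise-processor', DenoiseProcessor);

The buffer size is chosen as the least common multiple of 128 and 480 so that the write and send positions line up exactly at rollover, keeping the sketch free of residue bookkeeping at the wrap point.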
The worklet code gets compiled as a separate .js bundle and lazy loaded as needed.
Use it in JaaS / using the iframe API
If you are a JaaS customer (or are using Jitsi Meet through the iframe API) we have added an API command to turn this on programmatically too! Check it out.
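For instance (the command name follows the iframe API documentation; double-check the handbook for your version):

// Embed a meeting, then toggle noise suppression programmatically.
const api = new JitsiMeetExternalAPI('meet.jit.si', {
    roomName: 'MyMeetingRoom',
    parentNode: document.querySelector('#meet')
});

api.executeCommand('setNoiseSuppressionEnabled', { enabled: true });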
Check it out!
In Jitsi Meet this feature can be activated by simply clicking on the Noise Suppression button.
Since in this case a sound file is probably worth more than 1000 words, here is an audio sample demonstrating the denoising:
Original audio:
Denoised audio:
Your personal meetings team.
Author: Andrei Gavrilescu
The post Enhanced noise suppression in Jitsi Meet appeared first on Jitsi.
It has been a while since our first release of end-to-end encryption for the web app and ever since we have tried to enhance and improve it. One of these enhancements was the introduction of The Double Ratchet Algorithm through libolm and automatic key negotiation.
Each participant has a randomly generated key which is used to encrypt the media. The key is distributed with other participants (so they can decrypt the media) via an E2EE channel which is established with Olm (using XMPP MUC private messages). You can read more about it in our whitepaper.
Even though the actual encryption/decryption API is different on web and mobile ("Insertable Streams" vs native Encryptors/Decryptors), the key exchange mechanism seemed like something that could be kept consistent between the two (even three, considering that Android and iOS are different) platforms. This took us to the next challenge: how can we reuse the JS web implementation of the double ratchet algorithm without any major changes, while also keeping in mind the performance implications it might have on the mobile apps?
Since our mobile apps are based on React Native the obvious solution was to wrap libolm so we could use the same code as on the web, but not all wrappers are created equal.
There are three major drawbacks to using this approach:
The first issue might not have had such a major impact on this specific use case, since the key exchange doesn't happen too frequently. The fact that every change has to be implemented twice is very likely to be a problem in the future, while the last issue, the asynchronicity of the bridge methods, is definitely a showstopper, since it would break the consistency of the web and mobile interfaces.
JavaScript Interface (JSI) is a new layer between the JavaScript engine and the C++ layer that provides means of communication between the JS code and the native C++ code in React Native. Since it doesn't require serialization, it is a lot faster than the traditional bridge approach, in addition to allowing us to provide a performant sync API.
As we'll show in what follows, it also solves the other two problems the classical approach poses: the implementation has to be done/modified only once (most of the time, since some glue code is still required) and, most importantly, the native methods called through JSI can be synchronous.
The first challenge was to find the proper way of initializing the C++ libraries and exposing the so-called “host functions” (these are C++ functions callable from the JS code).
For this we took advantage of the mechanism for native modules and the way they are initialized by the RN framework, thus creating OlmModule.java and OlmPackage.java. OlmPackage is just a simple ReactPackage that has OlmModule as its native module.
Within the lifecycle of this ReactContextBaseJavaModule, the actual magic happens: loading the C++ libraries and exposing the necessary behavior to the JS side.
The C++ library is loaded inside a static initializer.
Exposing the host functions to the JS side is done in the initialize method of the OlmModule, through the JNI native function nativeInstall. This method is implemented in cpp-adapter.cpp where, besides some JNI-specific code, jsiadapter::install is called, which actually exposes the host functions. It is here that the Android-specific glue code ends, the jsiadapter being platform agnostic and used, as we'll show, by iOS as well.
We also used the iOS native bridge mechanism for initialization, but here the implementation is even easier: Olm.h and Olm.mm contain the module, where, in the setBridge method, jsiadapter::install is called, exposing the host functions.
As stated above, both Android and iOS specific code ends up calling the platform agnostic jsiadapter::install method. It is here where the C++ methods are exposed, i.e. JS objects are set on jsiRuntime.global with methods that call directly into the C++ code.
Object module = Object(jsiRuntime);
// ...add methods to module
jsiRuntime.global().setProperty(jsiRuntime, "_olm", move(module));
This object will be accessible on the JS side via a global variable. For our use case only one object is enough, but it is here where as many objects as necessary can be exposed, without having to change any of the platform specific code.
auto createOlmAccount = Function::createFromHostFunction(
    jsiRuntime,
    PropNameID::forAscii(jsiRuntime, "createOlmAccount"),
    0,
    [](Runtime &runtime, const Value &thisValue, const Value *arguments, size_t count) -> Value {
        auto accountHostObject = AccountHostObject(&runtime);
        auto accountJsiObject = accountHostObject.asJsiObject();
        return move(accountJsiObject);
    });
module.setProperty(jsiRuntime, "createOlmAccount", move(createOlmAccount));

auto createOlmSession = Function::createFromHostFunction(
    jsiRuntime,
    PropNameID::forAscii(jsiRuntime, "createOlmSession"),
    0,
    [](Runtime &runtime, const Value &thisValue, const Value *arguments, size_t count) -> Value {
        auto sessionHostObject = SessionHostObject(&runtime);
        auto sessionJsiObject = sessionHostObject.asJsiObject();
        return move(sessionJsiObject);
    });
module.setProperty(jsiRuntime, "createOlmSession", move(createOlmSession));
Two methods are exposed: createOlmAccount and createOlmSession, both of them returning HostObjects.
A HostObject is a C++ object that can be registered with the JS runtime: its exposed methods can be called from the JS code, but it can also be passed back and forth between JS and C++ while still remaining a fully operational C++ object.
For our use case, the AccountHostObject and SessionHostObject are wrappers over the native olm-specific objects OlmAccount and OlmSession, and they contain methods that can be called from the JS code (identity_keys, generate_one_time_keys, one_time_keys etc. for AccountHostObject; create_outbound, create_inbound, encrypt, decrypt etc. for SessionHostObject).
The way these methods are exposed from C++ to JS is again through host functions, in the HostObject::get method:
Value SessionHostObject::get(Runtime &rt, const PropNameID &sym) {
    // Derive the requested method name from the property symbol.
    auto methodName = sym.utf8(rt);

    if (methodName == "create_outbound") {
        return Function::createFromHostFunction(
            *runtime,
            PropNameID::forAscii(*runtime, "create_outbound"),
            0,
            [](Runtime &runtime, const Value &thisValue, const Value *arguments, size_t count) -> Value {
                auto sessionJsiObject = thisValue.asObject(runtime);
                auto sessionHostObject = sessionJsiObject.getHostObject<SessionHostObject>(runtime).get();
                auto accountJsiObject = arguments[0].asObject(runtime);
                auto accountHostObject = accountJsiObject.getHostObject<AccountHostObject>(runtime).get();
                auto identityKey = arguments[1].asString(runtime).utf8(runtime);
                auto oneTimeKey = arguments[2].asString(runtime).utf8(runtime);
                sessionHostObject->createOutbound(accountHostObject->getOlmAccount(), identityKey, oneTimeKey);
                return Value(true);
            });
    }
    // ...other methods handled similarly.
    return Value::undefined();
}
Example:
const olmAccount = global._olm.createOlmAccount();
const olmSession = global._olm.createOlmSession();
olmSession.create_outbound(olmAccount, 'someIdentityKey', 'someOneTimeKey');
As shown, global._olm.createOlmAccount() and global._olm.createOlmSession() will return a HostObject. When calling any method on it (create_outbound in the example) the HostObject::get method will be called with the proper parameters, i.e. the Runtime and the method name, so we use this method name to expose the desired behavior.
Note that the calling HostObject can be fully reconstructed on the C++ side,
auto sessionJsiObject = thisValue.asObject(runtime);
auto sessionHostObject = sessionJsiObject.getHostObject<SessionHostObject>(runtime).get();
Parameters can also be passed from JS to C++, including other HostObjects:
auto accountJsiObject = arguments[0].asObject(runtime);
auto accountHostObject = accountJsiObject.getHostObject<AccountHostObject>(runtime).get();
auto identityKey = arguments[1].asString(runtime).utf8(runtime);
auto oneTimeKey = arguments[2].asString(runtime).utf8(runtime);
As mentioned from the very beginning, keeping the web and mobile interfaces consistent was the main goal, so, after implementing all the necessary JSI functionality, it was all wrapped into some nice TypeScript classes: Account and Session.
Their usages are shown in the example integration that comes with the SDK:
const olmAccount = new Olm.Account();
olmAccount.create();
const identityKeys = olmAccount.identity_keys();

const olmSession = new Olm.Session();
olmSession.create();
olmSession.create_outbound(olmAccount, idKey, otKey);
This is the exact same API that the olm JS package exposes. Mission accomplished!
Implementing this RN library that exposes the libolm functionality is just a piece of the bigger mobile E2EE puzzle. It will be integrated in the Jitsi Meet app and used for the implementation of the E2EE communication channel between each participant, i.e. for exchanging the keys.
Since the WebCrypto API is not available in RN, we have to expose a subset of the methods for key generation (importing, deriving, generating random bytes) and again we plan to do it through JSI.
It turns out the olm library contains these methods, so it is possible we'll expose them in the react-native-olm library.
WebRTC provides a simple API that allows us to obtain the same result that we do on the web with “insertable streams”: FrameEncryptorInterface and FrameDecryptorInterface, in the C++ layer.
The encryptor is to be set on an RTPSender and the decryptor on an RTPReceiver; they basically act as a proxy for each frame that is sent/received, making it possible to add logic for constructing/deconstructing the SFrame out of each frame.
The fact that this code runs on the native side is of major importance: the performance cost of communication between JS and native would be prohibitive here, since those operations have to be done many times a second, for each frame, probably making the audio and video streams incoherent.
The only operations that will be done from the JS side are enabling E2EE and the key exchange steps. We will have to expose the methods for setting the AES-GCM keys from the JS side to the native FrameEncryptors and FrameDecryptors, most likely using the JSI path.
While we were busy working on this, the good folks over at Matrix created vodozemac, a new libolm implementation in Rust, and Matrix highly recommends migrating to it going forward. At the moment it only provides bindings for JS and Python, while the C++ bindings are still in progress. We'll keep a close eye on it and update to vodozemac after we have all the pieces in place.
You can start tinkering with it today, here is the GitHub repo.
Your personal meetings team.
Author: Titus Moldovan
The post A stepping stone towards end-to-end encryption on mobile appeared first on Jitsi.
Back in 2018 we first released cascaded bridges based on geo-location on meet.jit.si. Then in 2020 as we struggled to scale the service to handle the increased traffic that came with the pandemic we had to disable it because of the load on the infrastructure. And now it’s finally back stronger and better!
In this post we'll go over how and why we use cascaded bridges with geo-location, how the new system is architected, and the experiment we ran to evaluate it.
We want to use geolocation for the usual reason – connect users to a nearby server to optimize the connection quality. But with multiple users in a conference the problem becomes more complex. When we have users in different geographic locations, using a single media server is not sufficient. Suppose there are some participants in the US and some in Australia. If you place the server in Australia, the US participants will have a high latency to the server and an even higher latency between each other – their media is routed from the US to AU and back to the US! Conversely if you place the server in the US the Australian participants have the same issues.
We can solve this by using multiple servers and having participants connect to a nearby server. The servers forward the media to each other. This way the “next hop” latency is lower, and so is the end-to-end latency for nearby endpoints.
There are many new things in our backend architecture!
We used to have shards consisting of a “signaling node” and a group of JVB (jitsi-videobridge, our media server) instances. In order to make bridges in different regions available for selection, we just interconnected all bridges in all shards. And this is exactly what broke when we had to scale to 50+ shards and 2000+ JVBs.
In the new architecture JVBs are no longer associated with a specific shard. A “shard” now consists of just a signaling node (running jicofo, prosody and nginx). We have a few of these per region, depending on the amount of traffic we expect. Independently, we have pools of JVBs, one pool in each region, which automatically scale up and down to match the current requirements.
In addition we have "remote" pools. These are pools of JVBs which make themselves available to shards in remote regions (but not in their local region). For example, we have a remote pool in us-east which connects to signaling nodes in all other regions. This separation of "local" vs "remote" pools is what allows us to scale the infrastructure without the number of cross-connections growing too much.
As an example, in the us-east region (Ashburn) we have 6 signaling nodes (“shards”) and a pool of JVBs available to them. This is the us-east “local” pool. We also have multiple “remote” JVB pools connected to the shards — one from each of the other regions (us-west, eu-central, eu-west, ap-south, ap-northeast, ap-southeast). Finally, we have a us-east “remote” JVB pool connected to shards in all other regions.
In late 2021 we completely replaced the COLIBRI protocol used for communication between jicofo and JVBs. This allowed us to address technical debt, optimize traffic in large conferences, and use the new secure-octo protocol.
In contrast to the old octo protocol, secure-octo connects individual pairs of JVBs. They run ICE/DTLS to establish a connection, and then use standard SRTP to exchange audio/video. This means that a secure VPN between JVBs is no longer required! Also, we can filter out streams which are not needed by the receiving JVB.
In the experiments we ran in 2018 we found that introducing geo-located JVBs had a small but measurable negative effect on round-trip-time between endpoints in certain cases. Notably endpoints in Europe had, on average, a higher RTT when cascading was enabled. We suspected that this was because we use two datacenters in Europe (in Frankfurt and London) and many endpoints have a similar latency to both. In such cases, introducing the extra JVB-to-JVB connection has almost no impact on the next-hop RTT, but increases the end-to-end RTT between endpoints.
To solve this problem we introduced "region groups": we grouped the Frankfurt and London regions, as well as the Ashburn (us-east) and Phoenix (us-west) regions. With region groups, we relax the selection criteria to avoid using multiple JVBs in the same region group.
As an example, when a participant in London joins a conference, we will select a JVB in London for them. Then, if a participant in Berlin (closer to Frankfurt than London) joins, we will use that same JVB in London instead of selecting a new one in Frankfurt.
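In code form, the relaxed selection rule looks roughly like this (a toy JavaScript sketch, not jicofo's actual selection logic; the region names are illustrative):

// A toy sketch of bridge-region selection with region groups.
const REGION_GROUPS = [
    ['eu-west', 'eu-central'],
    ['us-east', 'us-west']
];

function sameGroup(a, b) {
    return a === b || REGION_GROUPS.some(g => g.includes(a) && g.includes(b));
}

// bridgeRegions: regions of bridges already in the conference.
function selectBridgeRegion(participantRegion, bridgeRegions) {
    // Reuse a bridge from the participant's region group if one exists,
    // otherwise bring up a bridge in the participant's own region.
    return bridgeRegions.find(r => sameGroup(r, participantRegion)) ?? participantRegion;
}

// Berlin participant (eu-central) joining a conference that already has a
// London (eu-west) bridge reuses it:
console.log(selectBridgeRegion('eu-central', ['eu-west'])); // -> 'eu-west'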
The new meet.jit.si infrastructure allowed us to easily perform an experiment comparing the case of no bridge cascading (control), cascading with no region groups defined (noRG), and cascading with region groups (grouping us-east and us-west, as well as eu-central and eu-west). We had 3 experimental "releases" live for a period of about two weeks, with conferences randomly distributed between them and the main release. We measured two things: end-to-end round-trip time between endpoints, and round-trip time between an endpoint and the JVB it's connected to (next-hop RTT).
By and large the results show that cascading works as designed and the introduction of region groups had the desired effect.
With cascading we see significantly lower end-to-end RTT in most cases. When the two endpoints are in the same region:
When the two endpoints are in different regions we see a slight increase when region groups are used, but the overall effect of cascading is positive.
The next-hop RTT is also significantly reduced with cascading. Overall we see a 29% decrease (from 223 to 158 milliseconds) when the endpoint and server are (were) on different continents.
You can see the full results here.
The post Bridge cascading with geo-location is back appeared first on Jitsi.
Virtual / hybrid meetings are part of our everyday life now. Some meet in their home office, in the kitchen, slouching on the couch, while taking a walk, or even while driving! We won't encourage you to have meetings while driving, since any distraction can be fatal, but we know many are doing it, so we decided to implement a distraction-free mode for those who choose to have a meeting while in the car.
On the latest Jitsi Meet beta version (22.2.0) you will notice a new button in the drawer: Car Mode. This will open the car mode screen, a brand new in-meeting experience with the basic meeting controls, like ending the meeting, selecting the sound device, and muting the microphone, but with enhanced sizes so you can easily use them without much distraction.
Car mode also saves you bandwidth as it disables all incoming/outgoing video streams, for a distraction free meeting experience. Another useful feature in car mode is push-to-talk: simply long-press the un-mute button to keep the microphone active and release it to automatically mute again. Push to talk is especially useful for passengers in the car, or when joining a conference while having a stroll. That’s right, you don’t need to be in a car to use this feature!
Those of you with Apple CarPlay enabled vehicles may have noticed the car did not show up in the sound devices selection and this created some confusion. This has been fixed and you’ll now see an entry in the audio device selection drawer:
This first iteration of the feature only opens the car mode when selected from the toolbar button. We are planning to add automatic detection for the user being in a car and automatically offering to switch to this mode in that case.
Android users: worry not, better Android Auto integration will come!
Your personal meetings team.
Author: Horatiu Muresan
The post Introducing Car Mode! appeared first on Jitsi.
Tsahi Levent-Levi, also known as BlogGeek.me, has established himself as arguably the most prominent WebRTC analyst. He has been commenting on the industry for the past decade, while also training WebRTC professionals and co-running the testRTC business, and that's without even getting into his prior real-time comms experience with Amdocs and Radvision. We asked Tsahi if he could share his thoughts on RTC CPaaS and specifically how Jitsi as a Service fits into the space. The following is what he wrote. Enjoy!
In the past two years I’ve seen many service providers and enterprises run their own video meetings based on Jitsi. It is one of the easiest ways to get video meetings implemented today – with all the bells and whistles.
After 8×8’s acquisition of Jitsi, the Jitsi team has been hard at work on 4 different tracks:
The introduction of JaaS is a very interesting angle to Jitsi, especially when coupled in with the open source project itself.
Look at today's video API solutions – the CPaaS vendors who happen to offer video APIs so you can develop your own communication applications. What you'll find is great platforms for developing your services, with the small caveat of being vendor-locked. The APIs of these vendors are specific to them. Any decision to switch from one vendor to another would necessitate a rewrite of the communication aspects of the application's code. And they offer no open source alternative of their own – one where you can just install, host and maintain their technology on your own infrastructure.
That’s not something bad or new. It is how money is made in many of the cloud API vendors across all sectors of the software industry.
In recent years, we have seen a shift in focus toward low-code solutions with video APIs.
Video APIs were mostly about publish and subscribe. You could publish your microphone, webcam and/or screen, and subscribe to other publishers' content. While this approach is quite flexible, it is also daunting and prone to performance issues. As they say, with great power comes great responsibility, only that the responsibility here was shifted towards the developers using the APIs.
Now, it is understood that this approach can't scale and a different one is needed. One such approach is to reduce the complexity by offering higher-level abstractions that handle the complexities on their own. These can come in the form of a reference application, a new API set, or a UI widget that can be embedded into applications.
Jitsi was similar in a way. It always offered a video bridge (the open source media server), but also the Jitsi Meet experience – a complete implementation of video meetings provided in open source and as a hosted end-user service.
Enter JaaS – Jitsi as a Service
Jitsi officially announced JaaS in January 2021. It added another layer to the Jitsi story – one of CPaaS and Video APIs.
With the Jitsi ecosystem, you now had 3 different ways to make use of Jitsi:
This reminds me of the way WordPress works – you can take the WordPress framework and install, host and maintain it on your own, or you can use Automattic's or another vendor's managed hosting service for WordPress, removing a lot of those headaches and focusing on what's important to you – the actual site content.
With Jitsi, you can decide to run and host everything on your own or use JaaS, having the Jitsi team manage and host it for you, removing a lot of the infrastructure headaches.
What I love most about this is the dogfooding part. This isn’t only about taking an open source project and making Video APIs out of it. Jitsi Meet is a managed service that has seen its own growing pains as it needed to scale in recent years. A lot of the work done in the Jitsi codebase and the DevOps scripting around it comes directly from end users who communicate and complain directly to the Jitsi team.
As if this weren’t enough, in November 2021, Jitsi announced a kind of an a la carte hosting offering. As that announcement states:
We are currently working on a service that would let Jitsi users easily connect their self-hosted Jitsi deployments to 8×8’s PSTN, LiveStreaming, Recording and transcriptions clouds.
This means that developers can now run their own Jitsi deployment, but then connect it to managed “features” of JaaS on a case by case basis. Want to have PSTN? Push a meeting to a live stream on YouTube? Record a session? Transcribe it? All of these things add complexity to a deployment and can now be abstracted out and “outsourced” to JaaS while maintaining your own hosted Jitsi cluster.
In a way, Jitsi unbundled their JaaS offering, making it simpler to adopt.
From Jitsi, to JaaS through the new a la carte offering, a flexible solution has emerged for developers needing video meeting solutions. One that can be consumed in many different ways.
This reduces the vendor lock-in challenge that many video API vendors have, simply because you are never bound to the JaaS offering in any way other than them offering the best possible service. Not happy? Take your code and host it on your own. No rewriting or vendor migration necessary.
I think this gives developers a very compelling solution that is a kind of a two-way street:
Developers can start by self hosting their own Jitsi Meet service.
This keeps control in the early days as they get acquainted with the platform and its nuances. At some point as they grow, it makes sense to think about a more global and scalable deployment. And one easy way to get there is to simply switch from the self hosted path towards the managed one by using JaaS.
Developers can also start from the managed JaaS solution.
The beauty of it here is that if they are unhappy, or decide they want to do things differently, they can simply install their own Jitsi servers and start maintaining their own infrastructure – without changing the actual application code as they do that.
With Jitsi and JaaS you can move from a self hosted to a managed service or vice versa.
Here’s the thing. If you’re looking for a video meeting service to make your own, and only care about a bit of customization to the video experience itself, then Jitsi is a great solution.
It enables you to go the route of a self hosted open source solution or to go with a fully managed video infrastructure and video APIs approach. All that wrapped with one of the most popular WebRTC media servers on the market.
The post Jitsi as a Service: a two-way street, by Tsahi Levent-Levi appeared first on Jitsi.
Here is the formal announcement that development for the next major version, 5.6.0, is now frozen. The focus now has to be on testing the master branch.
Also, the master branch should not get commits with new features until the 5.6 branch is created, expected to happen in 2-4 weeks, depending on how testing goes. Meanwhile, commits with new features in the C code can be pushed to personal branches, and new pull requests can still be made, but they will be merged after the 5.6 branch is created.
Commits with documentation improvements, enhancements to related tools (e.g., kamctl, kamcmd), merging of existing pull requests, exporting missing KEMI functions, and completing the functionality of the new modules added for 5.6 can still be done.
Once the 5.6 branch is created, new features can be pushed to the master branch again as usual. From that moment, v5.6.0 should be out very soon, with the time used for further testing but also for preparing the release of packages.
If someone is not sure whether a commit brings a new feature, just make a pull request and it can be discussed there on the GitHub portal or via the sr-dev mailing list.
A summary of what is new in upcoming 5.6 is going to be built at:
Upgrade guidelines will be collected at:
Everyone is more than welcome to contribute to the above wiki pages, especially to the upgrade guidelines, to help everyone else during the migration process from v5.5.x to 5.6.x.
Thanks for flying Kamailio!
The post Development For v5.6.0 Is Frozen first appeared on The Kamailio SIP Server Project.
Conner Luzier is a TADHack regular. Check out his hacks from TADHack-mini Orlando in 2018, 2019, and 2020. He's also presented at Enterprise Connect several times; how many soon-to-be graduates have such industry exposure!
He's looking for work, ideally full-time, but is happy to take project work to build his references in the industry. Conner has shown his abilities and get-up-and-go time and again at TADHack. So please get in contact with Conner, thank you.
From TADHack-mini Orlando 2020. TeleQuest (Garrett Curtis, Conner Luzier, Jenn Gibson, Eric Good) – won Apidaze and Intelepeer prizes. TeleQuest is a phone-based adventure game, perfect to keep your social distance while connecting with others.
From TADHack-mini Orlando 2019. SaveMe by Giancarlos Toro, Conner Luzier, Thiago Pereira, Vikki Horn won prizes from Flowroute, Telesign, and VoIP Innovations. It is a secure video reporting app using WebRTC and SMS.
From TADHack-mini Orlando 2018. Polls IO by Conner, Paul, and Giancarlos used VoIP Innovations to create a service that allows local government to be more involved with their constituents, and constituents to be more involved with their local government, by allowing easy opinion polling on new projects, bills, etc. It also lets local governments easily send out updates on new legislation and its progress. This polling service can be generalized for businesses and events. They won the VoIP Innovations prize and t-shirts from Code for Orlando. See their pitch video here, their slides here, and a video of their demo at Enterprise Connect here.
Here’s Conner at Enterprise Connect in 2019, third from the right.
The post Conner Luzier – seeking full-time or project work in programmable comms / WebRTC appeared first on Blog @ TADHack - Telecom Application Developer Hackathon.
TADHack is the largest global hackathon focused on programmable communications, running since 2014. This year we are partnering with Network X (Broadband World Forum, 5G World, and Telco Cloud) as the pre-event hackathon. This is similar to what we do before Enterprise Connect with TADHack-mini Orlando in March.
Thank you to STROLID, Symbl.ai, Telnyx, Jambonz, and Subspace for making TADHack possible.
At TADHack Global 2021 Symbl.ai achieved an amazing result, 21 hacks, and Telnyx was even more impressive with 30 hacks; all created over one weekend by developers from around the world.
The locations we anticipate running are: Chicago, Tampa, Colombia, South Africa, Berlin, UK, Sri Lanka, Amsterdam, France, and remote (anywhere in the world). We are adding new locations, e.g. TADHack France (run by Le Voice Lab), and Amsterdam, as we are the pre-event hackathon to Network X (Broadband World Forum, 5G World, and Telco Cloud).
We had great success with TADHack Teens in Sri Lanka in 2021 thanks to hSenid Mobile and Ideamart, and plan to expand this initiative to South Africa and the US. We’re training the next generation of programmable communications engineers and entrepreneurs, as well as excellent summer interns!
We have two additional initiatives for 2022:
The TADHack website is still in development, and definitely needs some accessibility improvements. Save the date, 15-16 Oct 2022, for the largest and longest-running global hackathon focused on programmable communications. Thank you.
The post TADHack Global 2022 Launch, Save the Date, 15-16 Oct appeared first on Blog @ TADHack - Telecom Application Developer Hackathon.
Over the years we’ve had many accessibility hacks. Last year we had an excellent hack, Colloquia11y, by team Similarly Geeky, comprising Lily Madar and Steven Goodwin. It’s an accessible conferencing solution (using Text To Speech and Speech To Text).
For TADHack 2022 we’ve created an Accessibility Prize that will be judged by Chris Lewis of Lewis Insight and Manisha Amin, CEO of The Centre for Inclusive Design. Chris and Manisha are also providing some resources to help hackers better understand Accessibility, and how to pitch to a blind person.
I’ve known Chris for several decades. He’s been a telecom analyst for 38 years, legally blind for 25 years, and started focusing on accessibility about 7-8 years ago.
Chris shares his real-world experience of using the web as a blind person: getting to a video on a page can take 20-100 clicks. It’s like accessing a web page through an IVR (Interactive Voice Response), with serial access to what a sighted person accesses in parallel. He demos the challenges he faces using the latest TADHack.com website. I’ve got some work to do there!
Chris also shares the challenges in using his fancy coffee machine. It has several error lights, e.g. no water, no beans, grounds tray full, drip tray full, but no way to know which one is lit. So he checks all four possible problems each time one of them signals an error. Chris provides many wonderful insights into the everyday challenges he faces.
Chris highlights the important role Alexa and Siri play in accessibility; for example, he needed to know how to spell ‘curfew’. Another challenge is that Zoom, Teams, and many of the other conferencing platforms have different shortcuts. One of the reasons he has not yet used Android’s accessibility tool, the TalkBack reader, is that it’s like learning yet another language, as he already uses iOS’s and Microsoft’s accessibility tools.
This 18-minute interview is a mine of insights on accessibility challenges and ways of thinking about accessibility, more than I can cover in the written section of this weblog, so check out the video. For me, the takeaway that designing for the edge cases means the center comes for free is powerful. And when giving your pitch, focus on the story, and avoid saying ‘as you can see on the slide’.
Thank you Chris.
Coming soon.
The post Accessibility Resources for TADHack Global 2022 appeared first on Blog @ TADHack - Telecom Application Developer Hackathon.
Kamailio v5.5.0 was released about one year ago, so it is time to set the milestones for getting 5.6.0 out.
It has been proposed to freeze the development on Thursday, April 14, 2022, test until mid-May or so, and then release the next major version, 5.6.0.
There has been a lot of development on existing components, plus a couple of new modules.
If anyone wants a different timeline towards 5.6.0, let’s discuss it on the sr-users@lists.kamailio.org mailing list and choose the one that suits most of the developers.
Thanks for flying Kamailio!
The post Freezing The Development For v5.6.0 first appeared on The Kamailio SIP Server Project.
Giovanni Tommasini from Evoseed.io published a GitHub repository with resources on how to deploy Kamailio with TLS in a Docker container using Let’s Encrypt certificates. It can be found at:
It should be a good starting point for anyone wanting to start a Kamailio instance with TLS enabled for secure and encrypted SIP signalling traffic.
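For orientation, the TLS side of such a setup typically boils down to a few lines in kamailio.cfg plus a TLS profile pointing at the Let’s Encrypt files. Here is a minimal sketch (sip.example.com is a placeholder domain; the paths and details in Giovanni’s repository may differ):

# kamailio.cfg: TLS essentials
enable_tls=yes
listen=tls:0.0.0.0:5061
loadmodule "tls.so"
modparam("tls", "config", "/etc/kamailio/tls.cfg")

# /etc/kamailio/tls.cfg: certificate profile
[server:default]
method = TLSv1.2+
certificate = /etc/letsencrypt/live/sip.example.com/fullchain.pem
private_key = /etc/letsencrypt/live/sip.example.com/privkey.pem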
Check also Giovanni’s blog post about this project:
We appreciate such contributions to the community. If you write, or are aware of, interesting articles about how to deploy and use Kamailio, we are more than happy to publish news about them on the kamailio.org website; just notify us about them via the sr-users mailing list!
Thanks for flying Kamailio!
The post Docker Container With Kamailio And Let’s Encrypt first appeared on The Kamailio SIP Server Project.
Thanks to a generous sponsor, qTox development gets funded for a year! Anthony Bilinski got funded to work full-time for a year, and sphaerophoria part-time for a few months. You can read more about this in qTox’s blog post, where Anthony goes into detail on his plans for the year.
Kamailio SIP Server v5.4.8 stable is out – a minor release including fixes in code and documentation since v5.4.7. The configuration file and database schema compatibility is preserved, which means you don’t have to change anything to update.
Kamailio® v5.4.8 is based on the latest source code of the GIT branch 5.4 and represents the latest stable version. We recommend those running previous 5.4.x or older versions to upgrade. No changes have to be made to the configuration file or database structure compared with the previous releases of the v5.4 branch.
Note that 5.4 is the second-most-recent stable branch, still officially maintained by the Kamailio project development team. The latest stable branch is 5.5, with v5.5.4 released from it.
Resources for Kamailio version 5.4.8
Source tarballs are available at:
Detailed changelog:
Download via GIT:
# git clone https://github.com/kamailio/kamailio kamailio
# cd kamailio
# git checkout -b 5.4 origin/5.4
Relevant notes, binaries and packages will be uploaded at:
Modules’ documentation:
What is new in 5.4.x release series is summarized in the announcement of v5.4.0:
Thanks for flying Kamailio!
The post Kamailio v5.4.8 Released first appeared on The Kamailio SIP Server Project.
Hey there Fellow Jitsters!
We’ve got some great news to share: Jitsi has been selected to participate in Google Summer of Code 2022!
We had a several-year hiatus, but we are thrilled to be back. GSoC has been a very successful program for us; thanks to it we got tons of new features, several projects, and even some new colleagues!
There is plenty of time to apply as a student, if you are so inclined. Take a quick look at the getting started guide from Google, pick an idea from our ideas list (or propose your own!), and apply!
Our community is always a great place to discuss project ideas before applying, we’ll welcome you all with open arms.
Let’s make GSoC 2022 our most successful one yet!
Last, but not least, huge thanks to Google for selecting Jitsi to participate in the GSoC program.
The post Jitsi is back at Google Summer of Code appeared first on Jitsi.
Today we are releasing an often-requested feature / package from the Jitsi community. We’re happy to announce the availability of the Jitsi Meet React SDK. This new SDK simplifies the integration of the Jitsi Meet External API with applications using React. It features simple React components that let you embed the Jitsi Meet experience into a React-based application, with full customization capabilities.
Let’s explore how to use it!
First we’ll create a new project using create-react-app, but you can start with an application you’re already working on; just make sure it’s using React 16 or higher.
npx create-react-app showcase-jitsi-react-sdk
Next let’s install the SDK as a dependency to access its modules.
npm install @jitsi/react-sdk
In App.js (in the created project) let’s import the first module:
import { JitsiMeeting } from '@jitsi/react-sdk';
We’ll instantiate the JitsiMeeting React component, which requires the roomName prop; keep in mind that you can use other props as well to get more control and enhance your client’s experience.
Let’s use the component in our application.
<JitsiMeeting
    roomName = { 'YOUR_CUSTOM_ROOM_NAME' } // make sure it's a good one!
/>
The result in your browser should look something like this:
Let’s tweak the styling a bit:
<JitsiMeeting
    roomName = { 'YOUR_CUSTOM_ROOM_NAME' }
    getIFrameRef = { node => node.style.height = '800px' }
/>
Now we’re cooking! Next we could add some config overwrites. Let’s say we’d like our participants to join the meeting with muted audio and make sure of it by hiding the corresponding pre-meeting button as well:
<JitsiMeeting
    configOverwrite = {{
        startWithAudioMuted: true,
        hiddenPremeetingButtons: ['microphone']
    }}
    roomName = { 'YOUR_CUSTOM_ROOM_NAME' }
    getIFrameRef = { node => node.style.height = '800px' }
/>
Done! You can override the same options as you can with the external API (that is, most of these). We also made it possible to add event listeners easily; be sure to check out the project’s README or our handbook.
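For example, events can be wired up through the onApiReady prop, which hands you the underlying external API object. A small sketch (the participantJoined event name comes from the external API’s event list):

<JitsiMeeting
    roomName = { 'YOUR_CUSTOM_ROOM_NAME' }
    onApiReady = { externalApi => {
        // externalApi is the external API instance backing the iframe
        externalApi.addListener('participantJoined', payload => {
            console.log('participant joined:', payload);
        });
    } }
/>

The same pattern works for any other event the external API exposes.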
The SDK also provides a JaaSMeeting component, preconfigured to work with JaaS. You’ll need to generate a JWT and pass an appId, and you’re off to the races. Make sure you read the JaaS console guide too! Here is a simple example:
<JaaSMeeting
    appId = { 'YOUR_APP_ID' }
    jwt = { JWT }
    roomName = { 'YOUR_CUSTOM_ROOM_NAME' }
/>
With this SDK, integrating meetings into React applications should be as simple as it gets! If you happen to come across any issues, you can reach out to us in the GitHub issue tracker or our community.
Your personal Meetings team.
The post Introducing the Jitsi Meet React SDK appeared first on Jitsi.
The Metaverse might not fully exist yet (and we don’t even know when it will) – but Meta is developing the world’s fastest AI supercomputer, which is slated to be finished in mid-2022.
We’ve all heard about the Metaverse in the last several months: a network of 3D virtual worlds focused on social connection, accessed by VR or AR goggles. In 2021 Facebook renamed itself “Meta Platforms” and declared itself devoted to developing the Metaverse. It’s thought that this virtual reality will be the next iteration of the internet. Though when, specifically, is a mystery.
Meta actually started its AI research ten years ago, with the Facebook AI Research lab. The lab developed chatbot design, AI systems that forget unnecessary information, and even synthetic skin that gives robots a sense of touch. In 2017, Meta launched its first AI supercomputer, which leveraged open-source and publicly available data sets. The new supercomputer, named AI Research SuperCluster – or RSC – will use its powerful hardware to train large computer vision and natural language processing models. Real-time voice translation will be one of the main highlights for RSC, so that people all over the world will be able to chat in the Metaverse in real time, all speaking different languages and seamlessly communicating with one another.
In a blog post, Meta explains what the AI can already do, which includes translating languages and identifying harmful content. Upon completion, RSC should be able to build entirely new AI systems that power real-time voice translation for huge groups of people, combining computer vision, natural language processing, and speech recognition. According to Mark Zuckerberg, RSC is already the fifth-fastest computer in the world. Built from thousands of processors and currently hiding away in an undisclosed location, it is already operational, but will be formally launched later this year. The current computational infrastructure will need to improve a thousandfold to power the Metaverse.
It makes sense that in order to fuel the Metaverse, RSC will require an immense amount of rapid computational power. There’s a ton of different ways to describe the computational power at play here – quintillions of operations per second, petaflops (one thousand teraflops) of computing in less than a millisecond, 5 exaflops of mixed precision computing at its peak, trillions of parameters in the neural networks. The natural language processor GPT-3 has 175 billion parameters alone. The current limit to RSC’s growth is the time it takes to train a neural network, which can take weeks of computing for large networks. New neural networks need to be built quickly in order to accomplish real time voice translations at the desired scale for the Metaverse.
The old system used 22,000 Nvidia V100 GPUs, while RSC currently runs on 6,080 Nvidia A100 GPUs. Later this year, when RSC is fully built out, it will be using 16,000 Nvidia A100 GPUs. RSC will train models with more than a trillion parameters on data sets as large as an exabyte, the equivalent of 36,000 years of high-quality video. With all 16,000 GPUs connected, the cache and storage will have a capacity of 1 exabyte (1 billion billion bytes), serving 16 terabytes of data per second to the system.
With this impressive computational power, RSC will enable new AI models that can learn from trillions of examples. But where, exactly, will these examples come from? Unlike its predecessor, RSC will train machine learning models on data sourced from the social media platforms owned by Meta: Facebook, Instagram, WhatsApp, and others. And this might make you raise your eyebrows: what about security and data privacy? Well, according to Meta, RSC has been designed from its infancy with privacy and security in mind; the supercomputer is isolated from the internet, with no inbound or outbound connections. Traffic flows only from Meta’s production data centers, and the entire data path is encrypted.
The COVID-19 pandemic has caused some setbacks for the project, just as it has for all industries. Supply chain constraints and other issues made it difficult to get the materials needed to build RSC, like chips and GPUs, and even basic construction materials. But if all goes according to plan, 2022 will be a big year for AI becoming faster, smarter, and more powerful than ever.
Kamailio SIP Server v5.5.4 stable is out – a minor release including fixes in code and documentation since v5.5.3. The configuration file and database schema compatibility is preserved, which means you don’t have to change anything to update.
Kamailio® v5.5.4 is based on the latest source code of the GIT branch 5.5 and represents the latest stable version. We recommend those running previous 5.5.x or older versions to upgrade. No changes have to be made to the configuration file or database structure compared with the previous releases of the v5.5 branch.
Resources for Kamailio version 5.5.4
Source tarballs are available at:
Detailed changelog:
Download via GIT:
# git clone https://github.com/kamailio/kamailio kamailio
# cd kamailio
# git checkout -b 5.5 origin/5.5
Relevant notes, binaries and packages will be uploaded at:
Modules’ documentation:
What is new in 5.5.x release series is summarized in the announcement of v5.5.0:
Thanks for flying Kamailio! We wish you a smooth time during this crisis and to stay healthy!
The post Kamailio v5.5.4 Released first appeared on The Kamailio SIP Server Project.