The Linux Foundation has announced a second year of sponsorship for the ongoing maintenance of the Linux manual pages (man-pages) project, led by Alejandro (Alex) Colomar. This critical initiative is made possible through the continued support of Google, Hudson River Trading, and Meta, who have renewed their sponsorship to ensure the long-term health of one of the most fundamental resources in the Linux ecosystem.
Since 2020, Alex Colomar has been the lead maintainer of the man-pages, providing detailed documentation for system calls, library functions, and other core aspects of the Linux API. While Alex initially maintained the project voluntarily, sponsorship beginning in 2024—supported by Google, Hudson River Trading, Meta, and others—has enabled him to dedicate more time and focus to improving the quality, accessibility, and accuracy of the Linux man-pages.
Expanding and Modernizing the Man-Pages
Over the last year, Alex’s work has resulted in major improvements that benefit both developers and maintainers across the Linux ecosystem. Highlights include:
Enhanced readability and structure: The SYNOPSIS sections of many pages now include clearer parameter names and array bounds, while large pages such as fcntl(2), futex(2), and keyctl(2) have been refactored into more focused, maintainable units.
Build system improvements: Updates make packaging easier for distributions and introduce new diagnostic checks that help identify inconsistencies across pages.
New documentation for GCC and Clang attributes: These additions reduce the documentation burden on the LLVM project while helping developers better understand compiler-specific features.
Coverage of POSIX.1-2024 and ISO C23 updates: Nearly all recent standard changes have been documented, with more updates in progress.
Developer tools and scripts: Utilities such as diffman-git(1), mansect(1), and pdfman(1) help developers compare versions, extract specific sections, and generate printable documentation. Some are now included by default in major Linux distributions.
Historical preservation: Documentation now includes guidance for producing PDF books of manual pages and the ongoing project of recreating original Unix manuals to compare modern APIs against historical references.
Upstream fixes and contributions: Beyond man-pages, Alex has submitted patches to groff, the Linux kernel, and GCC, and contributed to improving the spatial memory safety of C through the ISO C Committee, including by adding the new _Countof() operator, which will continue to evolve in the coming years.
Enabling Sustainability Through Collaboration
The man-pages project continues to be one of the most relied-upon open documentation resources in computing, providing millions of developers with accurate and accessible information directly from the command line. Its continued maintenance is vital to the long-term health of Linux and open source software at large.
In Part One of this series, we examined how the SONiC control plane and the VPP data plane form a cohesive, software-defined routing stack through the Switch Abstraction Interface.
We outlined how SONiC’s Redis-based orchestration and VPP’s user-space packet engine come together to create a high-performance, open router architecture.
In this second part, we’ll turn theory into practice. You’ll see how the architecture translates into a working environment, through a containerized lab setup that connects two SONiC-VPP routers and Linux hosts.
Reconstructing the L3 Routing Demo
Understanding the architecture is foundational, but the true power of this integration becomes apparent through a practical, container-based lab scenario.
The demo constructs a complete L3 routing environment using two SONiC-VPP virtual routers and two Linux hosts, showcasing how to configure interfaces, establish dynamic routing, and verify end-to-end connectivity.
Lab Environment and Topology
The demonstration is built using a containerized lab environment, orchestrated by a tool like Containerlab. This approach allows for the rapid deployment and configuration of a multi-node network topology from a simple declarative file. The topology consists of four nodes:
router1: A SONiC-VPP virtual machine acting as the gateway for the first LAN segment.
router2: A second SONiC-VPP virtual machine, serving as the gateway for the second LAN segment.
PC1: A standard Linux container representing a host in the first LAN segment.
PC2: Another Linux container representing a host in the second LAN segment.
These nodes are interconnected as follows:
An inter-router link connects router1:eth1 to router2:eth1.
PC1 is connected to router1 via PC1:eth2 and router1:eth2.
PC2 is connected to router2 via PC2:eth2 and router2:eth2.
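For reference, a Containerlab topology expressing these four nodes and three links might look like the sketch below; the node kinds and image names are placeholders, since the exact packaging of the SONiC-VPP image depends on your build.

```yaml
# sonic-vpp-l3.clab.yml -- illustrative sketch; kinds and images are assumptions
name: sonic-vpp-l3
topology:
  nodes:
    router1:
      kind: linux              # substitute the kind/image of your SONiC-VPP build
      image: sonic-vpp:latest
    router2:
      kind: linux
      image: sonic-vpp:latest
    PC1:
      kind: linux
      image: alpine:latest
    PC2:
      kind: linux
      image: alpine:latest
  links:
    - endpoints: ["router1:eth1", "router2:eth1"]   # inter-router link
    - endpoints: ["PC1:eth2", "router1:eth2"]       # first LAN segment
    - endpoints: ["PC2:eth2", "router2:eth2"]       # second LAN segment
```

Deploying the file with containerlab deploy -t sonic-vpp-l3.clab.yml brings up all four nodes and their wiring in one step.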
Initial Network Configuration
Once the lab is deployed, a startup script applies the initial L3 configuration to all nodes.
Host Configuration: The Linux hosts, PC1 and PC2, are configured with static IP addresses and routes.
PC1 is assigned the IP address 10.20.1.1/24 and is given a static route for the 10.20.2.0/24 network via its gateway, router1 (10.20.1.254).
PC2 is assigned the IP address 10.20.2.1/24 and is given a static route for the 10.20.1.0/24 network via its gateway, router2 (10.20.2.254).
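Inside the host containers, this addressing can be applied with standard iproute2 commands; a minimal sketch, assuming the LAN-facing interface is eth2 as in the topology above:

```bash
# PC1: address on the first LAN segment, plus a static route to PC2's subnet via router1
ip addr add 10.20.1.1/24 dev eth2
ip route add 10.20.2.0/24 via 10.20.1.254

# PC2: address on the second LAN segment, plus a static route to PC1's subnet via router2
ip addr add 10.20.2.1/24 dev eth2
ip route add 10.20.1.0/24 via 10.20.2.254
```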
Router Interface Configuration: The SONiC-VPP routers are configured using the standard SONiC CLI.
router1:
The inter-router interface Ethernet0 is configured with the IP 10.0.1.1/30.
The LAN-facing interface Ethernet4 is configured with the IP 10.20.1.254/24.
router2:
The inter-router interface Ethernet0 is configured with the IP 10.0.1.2/30.
The LAN-facing interface Ethernet4 is configured with the IP 10.20.2.254/24.
After IP assignment, each interface is brought up using the sudo config interface startup command.
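Concretely, the router-side addressing and interface bring-up reduce to a short SONiC CLI session; shown here for router1 as a sketch (router2 mirrors it with its own addresses):

```bash
# router1: address the inter-router and LAN-facing interfaces
sudo config interface ip add Ethernet0 10.0.1.1/30
sudo config interface ip add Ethernet4 10.20.1.254/24

# bring both interfaces up
sudo config interface startup Ethernet0
sudo config interface startup Ethernet4
```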
Dynamic Routing with BGP
With the interfaces configured, dynamic routing is established between the two routers using the FRRouting suite integrated within SONiC. The configuration is applied via the vtysh shell.
iBGP Peering: An internal BGP (iBGP) session is established between router1 and router2 as they both belong to the same Autonomous System (AS) 65100.
router1 (router-id 10.0.1.1) is configured to peer with router2 at 10.0.1.2.
router2 (router-id 10.0.1.2) is configured to peer with router1 at 10.0.1.1.
Route Advertisement: Each router advertises its connected LAN segment into the BGP session.
router1 advertises the 10.20.1.0/24 network.
router2 advertises the 10.20.2.0/24 network.
This BGP configuration ensures that router1 learns how to reach PC2’s network via router2, and router2 learns how to reach PC1’s network via router1.
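In vtysh, the iBGP peering and route advertisement described above amount to only a few lines of FRR configuration; a sketch for router1 (router2 is symmetric, with its own router-id and network statement):

```
router1# configure terminal
router1(config)# router bgp 65100
router1(config-router)# bgp router-id 10.0.1.1
router1(config-router)# neighbor 10.0.1.2 remote-as 65100
router1(config-router)# address-family ipv4 unicast
router1(config-router-af)# network 10.20.1.0/24
router1(config-router-af)# exit-address-family
router1(config-router)# end
```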
Verification and Data Path Analysis
The final phase is to verify that the configuration is working correctly at every layer of the stack.
Control Plane Verification: The BGP session status and learned routes can be checked from within vtysh on either router. On router1, the command show ip bgp summary would confirm an established peering session with router2. The command show ip route would display the route to 10.20.2.0/24 learned via BGP from 10.0.1.2.
Data Plane Verification: To confirm the route has been programmed into the VPP data plane, an operator would access the VPP command-line interface (vppctl) inside the syncd container. The command show ip fib would display the forwarding table, which should include the BGP-learned route to 10.20.2.0/24, confirming that the state has been successfully synchronized from the control plane.
End-to-End Test: The ultimate test is to generate traffic between the hosts. A simple ping 10.20.2.1 from PC1 should succeed. This confirms that the entire data path is functional: PC1 sends the packet to its gateway (router1), router1 performs a lookup in its VPP FIB and forwards the packet to router2, which then forwards it to PC2. The return traffic follows the reverse path, validating the complete, integrated solution.
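Pulled together, the verification steps map to a handful of commands. The sketch below assumes the VPP CLI is reached inside the syncd container, as described above; the exact access path may differ between builds.

```bash
# Control plane (on router1)
vtysh -c "show ip bgp summary"
vtysh -c "show ip route 10.20.2.0/24"

# Data plane: the VPP FIB, queried via vppctl inside the syncd container
docker exec -it syncd vppctl show ip fib 10.20.2.0/24

# End-to-end test (from PC1)
ping -c 3 10.20.2.1
```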
This practical demonstration, using standard container tooling and declarative configurations, powerfully illustrates the operational simplicity and robustness of the SONiC-VPP architecture for building high-performance, software-defined L3 networks.
Performance Implications and Future Trajectories
The elegance of the SONiC-VPP integration is matched by its impressive performance and its applicability to a wide range of modern networking challenges.
By offloading the data plane from the kernel to a highly optimized user-space framework, this solution unlocks capabilities that are simply unattainable with traditional software-based routing.
The performance gains are impressive.
VPP is consistently benchmarked as being much faster than kernel-based forwarding, with some sources claiming a 10x to 100x improvement in packet processing throughput.2
This enables use cases like “Terabit IPSec” on multi-core COTS servers, a feat that would have been unthinkable just a few years ago.3 Real-world deployments have validated this potential.
A demonstration at the ONE Summit 2024 showcased a SONiC-VPP virtual gateway providing multi-cloud connectivity between AWS and Azure. The performance testing revealed a round-trip time of less than 1 millisecond between application workloads and the cloud provider on-ramps (AWS Direct Connect and Azure ExpressRoute), highlighting its suitability for high-performance, low-latency applications.4
This level of performance opens the door to a variety of demanding use cases:
High-Performance Edge Routing: As a virtual router or gateway, SONiC-VPP can handle massive traffic volumes at the network edge, serving as a powerful and cost-effective alternative to proprietary hardware routers.5
Multi-Cloud and Hybrid Cloud Connectivity: The solution is ideal for creating secure, high-throughput virtual gateways that interconnect on-premises data centers with multiple public clouds, as demonstrated in the ONE Summit presentation.4
Integrated Security Services: The performance of VPP makes it an excellent platform for computationally intensive security functions. Commercial offerings based on this architecture, like AsterNOS-VPP, package the solution as an integrated platform for routing, security (firewall, IPsec VPN, IDS/IPS), and operations.5
While the raw throughput figures are compelling, a more nuanced benefit lies in the nature of the performance itself.
The Linux kernel, for all its power, is a general-purpose operating system. Its network stack is subject to non-deterministic delays, caused by system interrupts, process scheduling, and context switches. This introduces unpredictable latency, which can be detrimental to sensitive applications.12
VPP, by running in user space on dedicated cores and using poll-mode drivers, sidesteps these sources of unpredictability. This provides not just high throughput, but consistent, low-latency performance. For emerging workloads at the edge, such as real-time IoT data processing, AI/ML inference, and 5G network functions, this predictable performance is often more critical than raw aggregate bandwidth.16 The key value proposition, therefore, is not just being “fast,” but being “predictably fast.”
The SONiC-VPP project is not static; it is an active area of development within the open-source community.
A key focus for the future is to deepen the integration by extending the SAI API to expose more of VPP’s rich feature set to the SONiC control plane. Currently, SAI primarily covers core L2/L3 forwarding basics.
However, VPP has a vast library of advanced features. Active development efforts are underway to create SAI extensions for features like Network Address Translation (NAT) and advanced VxLAN multi-tenancy capabilities, which would allow these functions to be configured and managed directly through the standard SONiC interfaces.6
A review of pull requests on the sonic-platform-vpp GitHub repository shows ongoing work to add support for complex features like VxLAN BGP EVPN and to improve ACL testing, indicating a healthy and forward-looking development trajectory.6
The Future is Software-Defined and Open
The integration of the SONiC control plane with the VPP data plane is far more than a clever engineering exercise.
It is a powerful testament to the maturity and viability of the disaggregated networking model. This architecture successfully combines the strengths of two of the most significant open-source networking projects, creating a platform that is flexible, performant, and free from the constraints of proprietary hardware.
It proves that the separation of the control and data planes is no longer a theoretical concept but a practical, deployable reality that offers unparalleled architectural freedom.
The synergy between SONiC and FD.io VPP, both flagship projects of the Linux Foundation, highlights the immense innovative power of collaborative, community-driven development.1
This combined effort has produced a solution that fundamentally redefines the router, transforming it from a monolithic hardware appliance into a dynamic, high-performance software application that can be deployed on commodity servers.
Perhaps most importantly, this architecture provides the tools to manage network infrastructure with the same principles that govern modern software development.
As demonstrated by the L3 routing demo’s lifecycle (building from code, configuring with declarative files, and deploying as a versioned artifact), the SONiC-VPP stack paves the way for true NetDevOps. It enables network engineers and operators to embrace automation, version control, and CI/CD pipelines, finally treating network infrastructure as code.7
In doing so, it delivers on the ultimate promise of software-defined networking: a network that is as agile, scalable, and innovative as the applications it supports.
The networking industry is undergoing a fundamental architectural transformation, driven by the relentless demands of cloud-scale data centers and the rise of software-defined infrastructure. At the heart of this evolution is the principle of disaggregation: the systematic unbundling of components that were once tightly integrated within proprietary, monolithic systems.
This movement began with the separation of network hardware from the network operating system (NOS), a paradigm shift championed by hyperscalers to break free from vendor lock-in and accelerate innovation.
In this blog post, we will explore how disaggregated networking takes shape, when the SONiC control plane meets the VPP data plane. You’ll see how their integration creates a fully software-defined router – one that delivers ASIC-class performance on standard x86 hardware, while preserving the openness and flexibility of Linux-based systems.
Disaggregation today extends to the software stack, separating the control plane from the data plane. This decoupling enables modular design, independent component selection, and more efficient performance and cost management.
The integration of Software for Open Networking in the Cloud (SONiC) and the Vector Packet Processing (VPP) framework represents the peak of this disaggregated model.
SONiC, originally developed by Microsoft and now a thriving open-source project under the Linux Foundation, has established itself as the de facto standard for a disaggregated NOS, offering a rich suite of L3 routing functionalities hardened in the world’s largest data centers.1 Its core design philosophy is to abstract the underlying switch hardware, allowing a single, consistent software stack to run on a multitude of ASICs from different vendors. This liberates operators from the constraints of proprietary systems and fosters a competitive, innovative hardware ecosystem.
Complementing SONiC’s control plane prowess is VPP, a high-performance, user-space data plane developed by Cisco and now part of the Linux Foundation’s Fast Data Project (FD.io).
VPP’s singular focus is to deliver extraordinary packet processing throughput on commodity commercial-off-the-shelf (COTS) processors. By employing techniques like vector processing and bypassing the traditional kernel network stack, VPP achieves performance levels previously thought to be the exclusive domain of specialized, expensive hardware like ASICs and FPGAs.
The fusion of these two powerful open-source projects creates a new class of network device: a fully software-defined router that combines the mature, feature-rich control plane of SONiC with the blistering-fast forwarding performance of VPP.
This architecture directly addresses a critical industry need for a network platform that is simultaneously programmable, open, and capable of line-rate performance without relying on specialized hardware.
The economic implications are profound. By replacing vertically integrated, vendor-locked routers with a software stack running on standard x86 servers, organizations can fundamentally alter their procurement and operational models. This shift transforms network infrastructure from a capital-expenditure-heavy (CAPEX) model, characterized by large upfront investments in proprietary hardware, to a more agile and scalable operational expenditure (OPEX) model.
The ability to leverage COTS hardware drastically reduces total cost of ownership (TCO) and breaks the cycle of vendor lock-in, democratizing access to high-performance networking and enabling a more dynamic, cost-effective infrastructure strategy.
Deconstructing the Components: A Tale of Two Titans
To fully appreciate the synergy of the SONiC-VPP integration, it is essential to first understand the distinct architectural philosophies and capabilities of each component. While they work together to form a cohesive system, their internal designs are optimized for entirely different, yet complementary, purposes. SONiC is engineered for control, abstraction, and scalability at the management level, while VPP is purpose-built for raw, unadulterated packet processing speed.
SONiC: The Cloud-Scale Control Plane
SONiC is a complete, open-source NOS built upon the foundation of Debian Linux. Its architecture is a masterclass in modern software design, abandoning the monolithic structure of traditional network operating systems in favor of a modular, containerized, microservices-based approach. This design provides exceptional development agility and serviceability.
Key networking functions, such as:
Border Gateway Protocol (BGP) routing stack
Link Layer Discovery Protocol (LLDP)
platform monitoring (PMON)
each run within their own isolated Docker container. This modularity allows individual components to be updated, restarted, or replaced without affecting the entire system, a critical feature for maintaining high availability in large-scale environments.
The central nervous system of this distributed architecture is an in-memory Redis database engine, which serves as the single source of truth for the switch’s state.
Rather than communicating through direct inter-process communication (IPC) or rigid APIs, SONiC’s containers interact asynchronously by publishing and subscribing to various tables within the Redis database. This loosely coupled communication model is fundamental to SONiC’s flexibility. Key databases include:
CONFIG_DB: Stores the persistent, intended configuration of the switch.
APPL_DB: A high-level, application-centric representation of the network state, such as routes and neighbors.
STATE_DB: Holds the operational state of various components.
ASIC_DB: A hardware-agnostic representation of the forwarding plane’s desired state.
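On a live system, these databases can be inspected directly with the sonic-db-cli helper, which makes the pub/sub pipeline easy to observe; the key patterns below are illustrative and vary slightly between releases.

```bash
# Application-level routes as published into APPL_DB by the routing stack
sonic-db-cli APPL_DB keys "ROUTE_TABLE:*"

# Hardware-agnostic forwarding state written to ASIC_DB for syncd to consume
sonic-db-cli ASIC_DB keys "ASIC_STATE:SAI_OBJECT_TYPE_ROUTE_ENTRY:*"

# Persistent intended configuration in CONFIG_DB
sonic-db-cli CONFIG_DB keys "PORT|*"
```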
The cornerstone of SONiC’s hardware independence, and the very feature that makes the VPP integration possible, is the Switch Abstraction Interface (SAI). SAI is a standardized C API that defines a vendor-agnostic way for SONiC’s software to control the underlying forwarding elements. A dedicated container, syncd, is responsible for monitoring the ASIC_DB; upon detecting changes, it makes the corresponding SAI API calls to program the hardware. Each hardware vendor provides a libsai.so library that implements this API, translating the standardized calls into the specific commands required by their ASIC’s SDK. This elegant abstraction allows the entire SONiC control plane to remain blissfully unaware of the specific silicon it is running on.
VPP: The User-Space Data Plane Accelerator
While SONiC manages the high-level state of the network, VPP is singularly focused on the task of moving packets as quickly as possible. As a core component of the FD.io project, VPP is an extensible framework that provides the functionality of a router or switch entirely in software. Its remarkable performance is derived from several key architectural principles.
Vector Processing
The first and most important is vector processing. Unlike traditional scalar processing, where the CPU processes one packet at a time through the entire forwarding pipeline, VPP processes packets in batches, or “vectors”. A vector typically contains up to 256 packets. The entire vector is processed through the first stage of the pipeline, then the second, and so on. This approach has a profound impact on CPU efficiency. The first packet in the vector effectively “warms up” the CPU’s instruction cache (i-cache), loading the necessary instructions for a given task. The subsequent packets in the vector can then be processed using these cached instructions, dramatically reducing the number of expensive fetches from main memory and maximizing the benefits of modern superscalar CPU architectures.
User-Space Orientation & Kernel Bypass
The second principle is user-space operation and kernel bypass. The Linux kernel network stack, while powerful and flexible, introduces performance overheads from system calls, context switching between kernel and user space, and interrupt handling. VPP avoids this entirely by running as a user-space process. It typically leverages the Data Plane Development Kit (DPDK) to gain direct, exclusive access to network interface card (NIC) hardware. Using poll-mode drivers (PMDs), VPP continuously polls the NIC’s receive queues for new packets, eliminating the latency and overhead associated with kernel interrupts. This direct hardware access is a critical component of its high-throughput, low-latency performance profile.
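This dedication of cores and NICs to VPP is expressed in its startup configuration. A minimal sketch of a startup.conf fragment, where the core numbers and PCI address are placeholders for your hardware:

```
# startup.conf -- illustrative fragment
cpu {
  main-core 1              # core for VPP's main thread
  corelist-workers 2-3     # dedicated forwarding (worker) cores
}

dpdk {
  dev 0000:03:00.0         # NIC handed to VPP's DPDK poll-mode driver
}
```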
Packet Processing Graph
Finally, VPP’s functionality is organized as a packet processing graph. Each feature or operation, such as an L2 MAC lookup, an IPv4 route lookup, or an Access Control List (ACL) check, is implemented as a “node” in a directed graph. Packets flow from node to node as they are processed. This modular architecture makes VPP highly extensible. New networking features can be added as plugins that introduce new graph nodes or rewire the existing graph, without requiring changes to the core VPP engine.
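Both the graph and the vector model are directly observable from the VPP CLI. The commands below are standard vppctl queries; the per-node vectors/call figure reported by show runtime is a quick indicator of how effectively packets are being batched.

```bash
vppctl show runtime      # per-node calls, vectors processed, and vectors/call
vppctl show interface    # per-interface rx/tx counters
```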
The design of SAI was a stroke of genius, originally intended to abstract the differences between various hardware ASICs.
However, its true power is revealed in its application here. The abstraction is so well defined that it can be used to represent not just a physical piece of silicon, but a software process. The SONiC control plane does not know or care whether the entity on the other side of the SAI API is a Broadcom Tomahawk chip or a VPP instance running on an x86 CPU. It simply speaks the standardized language of SAI.
This demonstrates that SAI successfully abstracted away not just the implementation details of a data plane, but the very notion of it being physical, allowing a purely software-based forwarder to be substituted with remarkable elegance.
| Feature | SONiC | VPP |
| --- | --- | --- |
| Primary Function | Control Plane & Management Plane | Data Plane |
| Architectural Model | Containerized Microservices | Packet Processing Graph |
| Key Abstraction | Switch Abstraction Interface (SAI) | Graph Nodes & Plugins |
| Operating Environment | Kernel/User-space Hybrid (Linux-based) | Pure User-space (Kernel Bypass) |
| Core Performance Mechanism | Distributed State Management via Redis | Vector Processing & CPU Cache Optimization |
| Primary Configuration Method | Declarative (config_db.json, Redis) | Imperative (startup.conf, Binary API) |
Creating a High-Performance Software Router
The integration of SONiC and VPP is a sophisticated process that transforms two independent systems into a single, cohesive software router.
The architecture hinges on SONiC’s decoupled state management and a clever translation layer that bridges the abstract world of the control plane with the concrete forwarding logic of the data plane. Tracing the lifecycle of a single route update reveals the elegance of this design.
The End-to-End Control Plane Flow
The process begins when a new route is learned by the control plane. In a typical L3 scenario, this happens via BGP.
Route Reception: An eBGP peer sends a route update to the SONiC router. This update is received by the bgpd process, which runs within the BGP container. SONiC leverages the well-established FRRouting (FRR) suite for its routing protocols, so bgpd is the FRR BGP daemon.
RIB Update: bgpd processes the update and passes the new route information to zebra, FRR’s core component that acts as the Routing Information Base (RIB) manager.
Kernel and FPM Handoff: zebra performs two critical actions. First, it injects a route into the host Linux kernel’s forwarding table via a Netlink message. Second, it sends the same route information to the fpmsyncd process using the Forwarding Plane Manager (FPM) interface, a protocol designed for pushing routing updates from a RIB manager to a forwarding plane agent.
Publishing to Redis: The fpmsyncd process acts as the first bridge between the traditional routing world and SONiC’s database-centric architecture. It receives the route from zebra and writes it into the APPL_DB table in the Redis database. At this point, the route has been successfully onboarded into the SONiC ecosystem.
Orchestration and Translation: The Orchestration Agent (orchagent), a key process within the Switch State Service (SWSS) container, is constantly subscribed to changes in the APPL_DB. When it sees the new route entry, it performs a crucial translation. It converts the high-level application intent (“route to prefix X via next-hop Y”) into a hardware-agnostic representation and writes this new state to the ASIC_DB table in Redis.
Synchronization to the Data Plane: The final step in the SONiC control plane is handled by the syncd container. This process subscribes to the ASIC_DB. When it detects the new route entry created by orchagent, it knows it must program this state into the underlying forwarding plane.
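This chain can be followed on a running system by querying each stage for the same prefix; a minimal sketch, with the prefix and key formats shown as examples (exact key layouts vary by release):

```bash
PREFIX="10.20.2.0/24"

# 1. FRR's RIB (zebra): the route as learned by BGP
vtysh -c "show ip route $PREFIX"

# 2. APPL_DB: the route as published by fpmsyncd
sonic-db-cli APPL_DB hgetall "ROUTE_TABLE:$PREFIX"

# 3. ASIC_DB: the hardware-agnostic entry produced by orchagent for syncd
sonic-db-cli ASIC_DB keys "ASIC_STATE:SAI_OBJECT_TYPE_ROUTE_ENTRY:*" | grep "$PREFIX"
```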
This entire flow is made possible by the architectural decision to use Redis as a central, asynchronous message bus.
In a traditional, monolithic NOS, the BGP daemon might make a direct, tightly coupled function call to a forwarding plane driver. This creates brittle dependencies. SONiC’s pub/sub model, by contrast, ensures that each component is fully decoupled. The BGP container’s only responsibility is to publish routes to the APPL_DB; it has no knowledge of who will consume that information.
This allows the final consumer, the data plane, to be swapped out with zero changes to any of the upstream control plane components. This decoupled architecture is what allows VPP to be substituted for a hardware ASIC so cleanly, and it implies that other data planes could be integrated in the future simply by creating a new SAI implementation.
The Integration Foundation: libsaivpp.so
The handoff from syncd to the data plane is where the specific SONiC-VPP integration occurs.
In a standard SONiC deployment on a physical switch, the syncd container would be loaded with a vendor-provided shared library (e.g., libsai_broadcom.so). When syncd reads from the ASIC_DB, it calls the appropriate standardized SAI API function (e.g., sai_api_route->create_route_entry()), and the vendor library translates this into proprietary SDK calls to program the physical ASIC.
In the SONiC-VPP architecture, this vendor library is replaced by a purpose-built shared library: libsaivpp.so. This library is the critical foundation of the entire system. It implements the full SAI API, presenting the exact same interface to syncd as any hardware SAI library would.
However, its internal logic is completely different. When syncd calls a function like create_route_entry(), libsaivpp.so does not communicate with a hardware driver. Instead, it translates the SAI object and its attributes into a binary API message that the VPP process understands.
It then sends this message to the VPP engine, instructing it to add the corresponding entry to its software forwarding information base (FIB). This completes a “decision-to-execution” loop, bridging SONiC’s abstract control plane with VPP’s high-performance software data plane.
| Component (Container) | Key Process(es) | Role in Integration |
| --- | --- | --- |
| BGP Container | bgpd | Receives BGP updates from external peers using the FRRouting stack. |
| SWSS Container | zebra, fpmsyncd | zebra manages the RIB. fpmsyncd receives route updates from zebra and publishes them to the Redis APPL_DB. |
| Database Container | redis-server | Acts as the central, asynchronous message bus for all SONiC components. Hosts the APPL_DB and ASIC_DB. |
| SWSS Container | orchagent | Subscribes to APPL_DB, translates application intent into a hardware-agnostic format, and publishes it to the ASIC_DB. |
| Syncd Container | syncd | Subscribes to ASIC_DB and calls the appropriate SAI API functions to program the data plane. |
| VPP Platform | libsaivpp.so | The SAI implementation for VPP. Loaded by syncd, it translates SAI API calls into VPP binary API messages. |
| VPP Process | vpp | The user-space data plane. Receives API messages from libsaivpp.so and programs its internal forwarding tables accordingly. |
In the second part of our series, we will move from architecture to action – building and testing a complete SONiC-VPP software router in a containerized lab.
We’ll configure BGP routing, verify control-to-data plane synchronization, and analyze performance benchmarks that showcase the real-world potential of this disaggregated design.
When teams consider deploying Kubernetes, one of the first questions that arises is: where should it run? The default answer is often the public cloud, thanks to its flexibility and ease of use. However, a growing number of organizations are revisiting the advantages of running Kubernetes directly on bare metal servers. For workloads that demand maximum performance, predictable latency, and direct hardware access, bare metal Kubernetes can achieve results that virtualized or cloud-hosted environments simply cannot match.
Why Bare Metal Still Matters
Virtualization and cloud abstractions have delivered convenience, but they also introduce overhead. By eliminating the virtualization layer, applications gain direct access to CPUs, memory, storage devices, and network interfaces. This architectural difference translates into tangible benefits:
Near-Native Performance – Applications can leverage the full power of the hardware, experiencing minimal overhead from hypervisors or cloud APIs. (Cloud Native Bare Metal Report, CNCF 2023)
Predictable Latency – A critical factor in industries such as real-time analytics, telecommunications, and financial trading, where even microseconds matter.
Efficient Hardware Utilization – GPUs, NVMe storage, or SmartNICs can be accessed directly, without restrictions or performance bottlenecks introduced by virtualization.
Cost Optimization – For workloads that are steady and long-term, owning and operating bare metal servers can be significantly more cost-effective than continuously paying cloud provider bills (IDC: Bare Metal Economics).
Deep Infrastructure Control – Operators can configure firmware, tune networking, and manage storage directly, without depending on the abstractions and limitations imposed by cloud environments.
Bare metal provides power and control, but it comes with its own challenge: managing servers at scale. This is precisely where Bare Metal as a Service (BMaaS) steps in.
Bare Metal as a Service with metal-stack.io
metal-stack is an open-source platform that makes bare metal infrastructure as easy to consume as cloud resources. It provides a self-service model for physical servers, automating provisioning, networking, and lifecycle management. Essentially, it transforms racks of hardware into a cloud-like environment—while retaining the performance advantages of bare metal.
Automated Provisioning – Servers can be deployed with clean, reproducible operating system images, similar to how VMs are created in cloud environments.
Integrated Networking – With BGP-based routing and compatibility with Kubernetes CNI plugins like Cilium or Calico, metal-stack ensures high-performance and secure networking. Load balancing can be handled with MetalLB.
Multi-Tenant Support – Physical machines can be securely assigned to different teams or projects, enabling isolation and resource fairness.
Open Source Foundation – The entire stack is open source (MIT/AGPL), ensuring transparency, avoiding vendor lock-in, and allowing teams to adapt the system to their unique needs.
By using metal-stack.io, organizations don’t need to compromise between the raw speed of bare metal and the automation of cloud infrastructure—they can have both.
Building the Bare Metal Kubernetes Stack
Deploying Kubernetes on bare metal requires assembling several components into a complete ecosystem. With metal-stack at the foundation, additional layers ensure resilience, security, and operational visibility:
Networking – Pair metal-stack’s BGP routing with a Kubernetes CNI like Cilium for low-latency, policy-driven communication.
Storage – Tools like Rook (Ceph) or OpenEBS create distributed, high-speed storage pools that can survive node failures.
Observability – Monitoring with Prometheus, and logging with Loki or ELK, provide the insights needed to manage both hardware and workloads effectively.
Security – Without the isolation of virtualization, it becomes essential to enforce RBAC, Pod Security Standards, and strict network policies.
Lifecycle Management – While metal-stack automates the server lifecycle, Kubernetes operators and GitOps tools (e.g., ArgoCD or Flux) help automate application deployment and ongoing operations.
This layered approach turns bare metal clusters into production-ready platforms capable of handling enterprise-grade workloads.
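As a concrete starting point, the networking layer above can be bootstrapped with standard Helm charts; a minimal sketch, with chart versions and values intentionally omitted and a working kubeconfig assumed:

```bash
# CNI: Cilium for policy-driven, low-latency pod networking
helm repo add cilium https://helm.cilium.io/
helm install cilium cilium/cilium --namespace kube-system

# Service load balancing on bare metal with MetalLB
helm repo add metallb https://metallb.github.io/metallb
helm install metallb metallb/metallb --namespace metallb-system --create-namespace
```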
Real-World Use Cases
Bare metal Kubernetes shines in scenarios where hardware performance and low latency are non-negotiable. Some standout use cases include:
AI/ML Training – Direct access to GPUs accelerates machine learning model training and inference workloads (NVIDIA on Bare Metal).
Telecom & 5G Networks – Edge deployments and network functions demand ultra-low latency and predictable performance.
Financial Services – High-frequency trading and other time-sensitive platforms benefit from microsecond-level predictability.
Enterprise Databases – Systems like PostgreSQL or Cassandra achieve higher throughput and stability when running directly on bare metal.
In each of these cases, bare metal Kubernetes provides both the performance edge and the flexibility of modern orchestration.
Getting Started with metal-stack.io
For organizations interested in exploring this model, the path forward is straightforward:
Benchmark workloads against equivalent cloud-based environments to validate performance gains.
Scale gradually, adding automation and expanding infrastructure as the needs grow.
This incremental approach reduces risk and allows teams to build confidence before moving critical workloads.
Conclusion & Next Steps
Running Kubernetes on bare metal delivers unmatched performance, efficiency, and control—capabilities that virtualized and cloud-based environments cannot fully replicate. Thanks to open-source solutions like metal-stack.io, organizations no longer need to choose between raw power and operational simplicity. Bare Metal as a Service (BMaaS) extends the agility of the cloud to physical servers, enabling DevOps teams to manage Kubernetes clusters that are faster, more predictable, and fully under their control.
For high-performance computing, latency-sensitive applications, and hardware-intensive workloads, Kubernetes on bare metal is not just an alternative—it is often the best choice.
Achieving and maintaining compliance with regulatory frameworks can be challenging for many organizations. Managing security controls manually often leads to excessive use of time and resources, leaving less available for strategic initiatives and business growth.
Standards such as CMMC, HIPAA, PCI DSS, SOC2, and GDPR demand ongoing monitoring, detailed documentation, and rigorous evidence collection. UTMStack, an open source Security Information and Event Management (SIEM) and Extended Detection and Response (XDR) solution, streamlines this complex task by leveraging its built-in log centralization, correlation, and automated compliance evaluation capabilities. This article explores how UTMStack simplifies compliance management by automating assessments, continuous monitoring, and reporting.
Understanding Compliance Automation with UTMStack
UTMStack inherently centralizes logs from various organizational systems, placing it in an ideal position to dynamically assess compliance controls. By continuously processing real-time data, UTMStack automatically evaluates compliance with critical controls. For instance, encryption usage, implementation of two-factor authentication (2FA), and user activity auditing, among many other controls, can be evaluated automatically.
Figure 1: Automated evaluation of Compliance framework controls.
Example Compliance Control Evaluations:
Encryption Enforcement: UTMStack continuously monitors logs to identify instances where encryption is mandatory (e.g., data in transit or at rest). It evaluates real-time compliance status by checking log events to confirm whether encryption protocols such as TLS are actively enforced, and it alerts administrators upon detection of potential non-compliance. The following event, for example, would trigger an encryption control failure:
“message”: [“The certificate received from the remote server was issued by an untrusted certificate authority. Because of this, none of the data contained in the certificate can be validated. The TLS connection request has failed. The attached data contains the server certificate.”]
Two-Factor Authentication (2FA): By aggregating authentication logs, UTMStack detects whether 2FA policies are consistently enforced across the enterprise. Compliance is assessed in real time, and automated alerts are generated if deviations occur, allowing immediate remediation. Taking Office365 as an example, the following log would confirm the use of 2FA in a given user authentication attempt:
User Activity Auditing: UTMStack processes comprehensive activity logs from applications and systems, enabling continuous auditing of user and device actions. This includes monitoring privileged account usage and data access patterns, and identifying anomalous behavior indicative of compliance risks. This is a native function of UTMStack that automatically checks the control if the required integrations are configured.
No-Code Compliance Automation Builder
One of UTMStack’s standout features is its intuitive, no-code compliance automation builder. Organizations can easily create custom compliance assessments and automated monitoring workflows tailored to their unique regulatory requirements without any programming experience. This flexibility empowers compliance teams to build bespoke compliance frameworks rapidly that update themselves and send reports on a schedule.
Figure 2: Compliance Framework Builder with drag and drop functionality.
Creating Custom Compliance Checks
UTMStack’s no-code interface allows users to:
Define custom compliance control logic visually.
Establish automated real-time monitoring of specific compliance conditions.
Generate and schedule tailored compliance reports.
This approach significantly reduces the administrative overhead, enabling compliance teams to respond swiftly to evolving regulatory demands.
Unified Compliance Management and Integration
Beyond automation, UTMStack serves as a centralized compliance dashboard, where controls fulfilled externally can be manually declared compliant within the platform. This unified “pane of glass” ensures that all compliance assessments—automated and manual—are consolidated into one comprehensive view, greatly simplifying compliance audits.
Moreover, UTMStack offers robust API capabilities, facilitating easy integration with existing Governance, Risk, and Compliance (GRC) tools, allowing seamless data exchange and further enhancing compliance management.
Sample Use Case: CMMC Automation
For CMMC compliance, organizations must demonstrate rigorous data security, availability, processing integrity, confidentiality, and privacy practices. UTMStack automatically evaluates controls related to these areas by analyzing continuous log data, such as firewall configurations, user access patterns, and audit trails.
Automated reports clearly detail compliance status, including specific control numbers and levels, enabling organizations to proactively address potential issues, dramatically simplifying CMMC assessments and future audits.
Figure 3: CMMC Compliance Control details
Compliance Control Evidence Remediation
When a framework control is identified as compliant, UTMStack automatically gathers the necessary evidence to demonstrate compliance. This evidence includes logs extracted from source systems and a dedicated, interactive dashboard for deeper exploration and analysis. Conversely, if the control evaluation identifies non-compliance, UTMStack employs an AI-driven technique known as Retrieval-Augmented Generation to provide remediation steps to security analysts and system engineers.
Compliance controls for each framework are not only evaluated but also provide dashboards for better understanding and navigation:
Figure 4: Compliance automation dashboards.
API-First Compliance Integration
UTMStack’s API-first approach enables compliance automation workflows to integrate effortlessly into existing IT ecosystems. Organizations leveraging various GRC platforms can easily synchronize compliance data, automate reporting, and centralize compliance evidence, thus minimizing manual data handling and significantly improving accuracy and efficiency.
Summary
Compliance management doesn’t have to be complicated or resource-draining. UTMStack’s open source SIEM and XDR solution simplifies and automates compliance with major standards such as CMMC, HIPAA, PCI DSS, SOC2, GDPR, and GLBA. By continuously monitoring logs, dynamically assessing compliance controls, and providing a user-friendly, no-code automation builder, UTMStack dramatically reduces complexity and enhances efficiency.
Organizations can easily customize and automate compliance workflows, maintain continuous monitoring, and integrate seamlessly with existing compliance tools, making UTMStack an invaluable resource for streamlined compliance management.
Join Our Community
We’re continuously improving UTMStack and welcome contributions from the cybersecurity and compliance community.
GitHub Discussions: Explore our codebase, submit issues, or contribute enhancements.
Discord Channel: Engage with other users, share ideas, and collaborate on improvements.
Your participation helps shape the future of compliance automation. Join us today!
Talos Linux is a specialized operating system designed for running Kubernetes. First and foremost, it handles full lifecycle management for Kubernetes control-plane components. At the same time, Talos Linux focuses on security by minimizing the user’s ability to influence the system. A distinctive feature of this OS is the near-complete absence of executables, including the absence of a shell and the inability to log in via SSH. All configuration of Talos Linux is done through a Kubernetes-like API.
Talos Linux is provided as a set of pre-built images for various environments.
The standard installation method assumes you will take a prepared image for your specific cloud provider or hypervisor and create a virtual machine from it. Or go the bare metal route and load the Talos Linux image using ISO or PXE methods.
Unfortunately, this does not work when dealing with providers that offer a pre-configured server or virtual machine without letting you upload a custom image or even use an ISO for installation through KVM. In that case, your choices are limited to the distributions the cloud provider makes available.
Usually during the Talos Linux installation process, two questions need to be answered: (1) How to load and boot the Talos Linux image, and (2) How to prepare and apply the machine-config (the main configuration file for Talos Linux) to that booted image. Let’s talk about each of these steps.
Booting into Talos Linux
One of the most universal methods is to use a Linux kernel mechanism called kexec.
kexec is both a utility and a system call of the same name. It allows you to boot into a new kernel from the existing system without performing a physical reboot of the machine. This means you can download the required vmlinuz and initramfs for Talos Linux, then specify the needed kernel command line and immediately switch over to the new system. It is as if the kernel were loaded by the standard bootloader at startup, only in this case your existing Linux operating system acts as the bootloader.
Essentially, all you need is any Linux distribution. It could be a physical server running in rescue mode, or even a virtual machine with a pre-installed operating system. Let’s walk through a case using Ubuntu, but it could be literally any other Linux distribution.
Log in via SSH and install the kexec-tools package; it contains the kexec utility, which you’ll need later:
apt install kexec-tools -y
Next, you need to download Talos Linux itself, that is, the kernel and initramfs. They can be downloaded from the official repository:
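For an amd64 machine, the kernel and initramfs can be fetched from the Talos GitHub releases; the version and asset names below are assumptions, so check the release page for the exact artifacts you need:

```bash
TALOS_VERSION=v1.7.0   # placeholder: pick the release you need
curl -LO "https://github.com/siderolabs/talos/releases/download/${TALOS_VERSION}/vmlinuz-amd64"
curl -LO "https://github.com/siderolabs/talos/releases/download/${TALOS_VERSION}/initramfs-amd64.xz"
```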
If you have a physical server rather than a virtual one, you’ll need to build your own image with all the necessary firmware using the Talos Factory service. Alternatively, you can use the pre-built images from the Cozystack project (a solution for building clouds we created at Ænix and transferred to the CNCF Sandbox) – these images already include all required modules and firmware:
Now you need the network information that will be passed to Talos Linux at boot time. Below is a small script that gathers everything you need and sets environment variables:
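A sketch of such a script, assuming the relevant interface is the one carrying the default route (adjust for your environment; python3 is used only to convert the prefix length to a dotted netmask):

```bash
#!/bin/sh
# Collect the current network settings so they can be passed to Talos via the kernel cmdline.
DEV=$(ip -o route get 8.8.8.8 | awk '{for (i=1;i<=NF;i++) if ($i=="dev") print $(i+1)}')
GATEWAY=$(ip -o route get 8.8.8.8 | awk '{for (i=1;i<=NF;i++) if ($i=="via") print $(i+1)}')
IP=$(ip -o -4 addr show dev "$DEV" | awk '{print $4}' | cut -d/ -f1 | head -n1)
PREFIX=$(ip -o -4 addr show dev "$DEV" | awk '{print $4}' | cut -d/ -f2 | head -n1)
NETMASK=$(python3 -c "import ipaddress,sys; print(ipaddress.ip_network('0.0.0.0/'+sys.argv[1]).netmask)" "$PREFIX")
export DEV IP GATEWAY NETMASK
echo "DEV=$DEV IP=$IP GATEWAY=$GATEWAY NETMASK=$NETMASK"
```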
You can pass these parameters via the kernel cmdline. Use the ip= parameter to configure the network using the kernel-level IP configuration mechanism. This method lets the kernel automatically set up interfaces and assign IP addresses during boot, based on information passed through the kernel cmdline. It’s a built-in kernel feature enabled by the CONFIG_IP_PNP option, and in Talos Linux it is enabled by default. All you need to do is provide properly formatted network settings in the kernel cmdline.
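Using the variables gathered above, the hand-off to Talos takes exactly two commands. The kexec flags are standard; the extra console and platform arguments are assumptions to be aligned with the Talos documentation for your version.

```bash
# Load the Talos kernel and initramfs into RAM, passing the platform and network settings
kexec -l vmlinuz-amd64 --initrd=initramfs-amd64.xz \
  --command-line="talos.platform=metal console=tty0 console=ttyS0 ip=${IP}::${GATEWAY}:${NETMASK}::${DEV}:off"

# Switch into the new kernel immediately, without a physical reboot
kexec -e
```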
The first command loads the Talos kernel into RAM, the second command switches the current system to this new kernel.
As a result, you’ll get a running instance of Talos Linux with networking configured. However, it’s currently running entirely in RAM, so if the server reboots, the system will return to its original state (by loading the OS from the hard drive, e.g., Ubuntu).
Applying machine-config and installing Talos Linux on disk
To install Talos Linux persistently on disk and replace the current OS, you need to apply a machine-config specifying the disk to install to. To configure the machine, you can use either the official talosctl utility or Talm, a utility maintained by the Cozystack project (Talm works with vanilla Talos Linux as well).
First, let’s consider configuration using talosctl. Before applying the config, ensure it includes network settings for your node; otherwise, after reboot, the node won’t configure networking. During installation, the bootloader is written to disk and does not contain the ip option for kernel autoconfiguration.
Here’s an example of a config patch containing the necessary values:
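A minimal sketch, where the disk, interface name, and addresses are placeholders for your machine:

```yaml
machine:
  install:
    disk: /dev/sda               # target disk for the persistent installation
  network:
    interfaces:
      - interface: eth0          # should match the interface configured via ip= at boot
        addresses:
          - 192.168.100.2/24
        routes:
          - network: 0.0.0.0/0   # default route
            gateway: 192.168.100.1
    nameservers:
      - 8.8.8.8
```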
When you have a lot of configs, you’ll want a convenient way to manage them. This is especially useful with bare-metal nodes, where each node may have different disks, interfaces, and specific network settings. As a result, you might need to maintain a patch for each node.
To solve this, we developed Talm — a configuration manager for Talos Linux that works similarly to Helm.
The concept is straightforward: you have a common config template with lookup functions, and when you generate a configuration for a specific node, Talm dynamically queries the Talos API and substitutes values into the final config.
Talm includes almost all of the features of talosctl, adding a few extras. It can generate configurations from Helm-like templates, and remember the node and endpoint parameters for each node in the resulting file, so you don’t have to specify these parameters every time you work with a node.
Let me show how to perform the same steps to install Talos Linux using Talm:
First, initialize a configuration for a new cluster:
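A sketch of the workflow is shown below; the template-related flags are assumptions based on Talm’s talosctl-like interface, so consult the Talm README for the exact options in your version.

```bash
# Initialize a new project in the current directory (secrets and base templates)
talm init

# Render a per-node config, recording the node address and endpoint in the file's modeline
# (flags shown are illustrative)
talm template -e 192.168.100.2 -n 192.168.100.2 -t templates/controlplane.yaml -i > nodes/node1.yaml

# Apply the rendered config to install Talos Linux on the target disk
talm apply -f nodes/node1.yaml -i
```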
Talm automatically detects the node address and endpoint from the “modeline” (a conditional comment at the top of the file) and applies the config.
You can also run other commands in the same way without specifying node address and endpoint options. Here are a few examples:
View the node status using the built-in dashboard command:
talm dashboard -f nodes/node1.yaml
Bootstrap etcd cluster on node1:
talm bootstrap -f nodes/node1.yaml
Save the kubeconfig to your current directory:
talm kubeconfig kubeconfig -f nodes/node1.yaml
Unlike the official talosctl utility, the generated configs do not contain secrets, allowing them to be stored in git without additional encryption. The secrets are stored at the root of your project and only in these files: secrets.yaml, talosconfig, and kubeconfig.
Summary
That’s our complete scheme for installing Talos Linux in nearly any situation. Here’s a quick recap:
Use kexec to run Talos Linux on any existing system.
Make sure the new kernel has the correct network settings by collecting them from the current system and passing them via the ip parameter in the cmdline. This lets you connect to the newly booted system via the API.
When the kernel is booted via kexec, Talos Linux runs entirely in RAM. To install Talos on disk, apply your configuration using either talosctl or Talm.
When applying the config, don’t forget to specify network settings for your node, because on-disk bootloader configuration doesn’t automatically have them.
Enjoy your newly installed and fully operational Talos Linux.
Exciting news! The Tazama project is officially a Digital Public Good, having met the criteria to be accepted into the Digital Public Goods Alliance!
Tazama is a groundbreaking open source software solution for real-time fraud prevention, and offers the first-ever open source platform dedicated to enhancing fraud management in digital payments.
Historically, the financial industry has grappled with proprietary and often costly solutions that have limited access and adaptability for many, especially in developing economies. This challenge is underscored by the Global Anti-Scam Alliance, which reported that nearly $1 trillion was lost to online fraud in 2022.
Tazama represents a significant shift in how financial monitoring and compliance have been approached globally, challenging the status quo by providing a powerful, scalable, and cost-effective alternative that democratizes access to advanced financial monitoring tools that can help combat fraud.
Tazama addresses key concerns of government, civil society, end users, industry bodies, and the financial services industry, including fraud detection, AML Compliance, and the cost-effective monitoring of digital financial transactions. The solution’s architecture emphasizes data sovereignty, privacy, and transparency, aligning with the priorities of governments worldwide. Hosted by LF Charities, which will support the operation and function of the project, Tazama showcases the scalability and robustness of open source solutions, particularly in critical infrastructure like national payment switches.
We are thrilled to be counted alongside many other incredible open source projects working to achieve the United Nations Sustainable Development Goals. For more information, visit the Digital Public Goods Alliance Registry.
A Consortium of Companies and Non Profit Organizations Collaborating to Create an Open Source Software Stack to Advance a Plurality of Interoperable Wallets
DUBLIN—September 13, 2022—The Linux Foundation, a global nonprofit organization enabling innovation through open source, today announced the intention to form the OpenWallet Foundation (OWF), a new collaborative effort to develop open source software to support interoperability for a wide range of wallet use cases. The initiative already benefits from strong support including leading companies across technology, public sector, and industry vertical segments, and standardization organizations.
The mission of the OWF is to develop a secure, multi-purpose open source engine anyone can use to build interoperable wallets. The OWF aims to set best practices for digital wallet technology through collaboration on open source code for use as a starting point for anyone who strives to build interoperable, secure, and privacy-protecting wallets.
The OWF does not intend to publish a wallet itself, nor offer credentials or create any new standards. The community will focus on building an open source software engine that other organizations and companies can leverage to develop their own digital wallets. The wallets will support a wide variety of use cases from identity to payments to digital keys and aim to achieve feature parity with the best available wallets.
Daniel Goldscheider, who started the initiative, said, “With the OpenWallet Foundation we push for a plurality of wallets based on a common core. I couldn’t be happier with the support this initiative has received already and the home it found at the Linux Foundation.”
Linux Foundation Executive Director Jim Zemlin said, “We are convinced that digital wallets will play a critical role for digital societies. Open software is the key to interoperability and security. We are delighted to host the OpenWallet Foundation and excited for its potential.”
OpenWallet Foundation will be featured in a keynote presentation at Open Source Summit Europe on 14 September 2022 at 9:00 AM IST (GMT +1) and a panel at 12:10 PM IST (GMT +1). In order to participate virtually and/or watch the sessions on demand, you can register here.
Pramod Varma, Chief Architect Aadhaar & India Stack, said, “Verifiable credentials are becoming an essential digital empowerment tool for billions of people and small entities. India has been at the forefront of it and is going all out to convert all physical certificates into digitally verifiable credentials via the very successful Digilocker system. I am very excited about the OWF effort to create an interoperable and open source credential wallet engine to supercharge the credentialing infrastructure globally.”
“Universal digital wallet infrastructure will create the ability to carry tokenized identity, money, and objects from place to place in the digital world. Massive business model change is coming, and the winning digital business will be the one that earns trust to directly access the real data in our wallets to create much better digital experiences,” said David Treat, Global Metaverse Continuum Business Group & Blockchain lead, Accenture. “We are excited to be part of the launch and development of an open-source basis for digital wallet infrastructure to help ensure consistency, interoperability, and portability with privacy, security, and inclusiveness at the core by design.”
Drummond Reed, Director of Trust Services at Avast, a brand of NortonLifeLock, said, “We’re on a mission to protect digital freedom for everyone. Digital freedom starts with the services used by the individual and the ability to reclaim their personal information and reestablish trust in digital exchanges. Great end point services start with the core of digital identity wallet technology. We are proud to be a founding supporter of the OpenWallet Foundation because collaboration, interoperability, and open ecosystems are essential to the trusted digital future that we envision.”
“The mobile wallet industry has seen significant advances in the last decade, changing the way people manage and spend their money, and the tasks that these wallets can perform have rapidly expanded. Mobile wallets are turning into digital IDs and a place to store documents whereby the security requirements are further enhanced,” said Taka Kawasaki, Co-Founder of Authlete Inc. “We understand the importance of standards that ensure interoperability as a member of the OpenID Foundation and in the same way we are excited to work with the Linux Foundation to develop a robust implementation to ensure the highest levels in security.”
“Providing secure identity and validated credential services is key to enabling a high-assurance health care service. The OpenWallet Foundation could play a key role in promoting the deployment of highly effective, secure digital health care systems that benefit the industry,” said Robert Samuel, Executive Director of Technology Research & Innovation, CVS Health.
“Daon provides the digital identity verification/proofing and authentication technology that enables digital trust at scale and on a global basis”, said Conor White, President – Americas at Daon, “Our experience with VeriFLY demonstrated the future importance of digital wallets for consumers and we look forward to supporting the OpenWallet Foundation.”
“We have been building and issuing wallets for decentralized identity applications for several years now. Momentum and interest in this area have grown tremendously, far beyond our own community. It is now more important than ever that a unified wallet core embracing open standards is created, with the ambition to become the global standard. The best industry players are pulling together under the OpenWallet Foundation. esatus AG is proud to be among them as an experience, expertise, and technology contributor,” said Dr. Andre Kudra, CIO, esatus AG.
Kaliya Young, Founder & Principal, Identity Woman in Business, said, “As our lives become more and more digital, it is critical to have strong and interoperable digital wallets that can properly safeguard our digital properties, whether it is our identities, data, or money. We are very excited to see the emergence of the OpenWallet Foundation, particularly its mission to bring key stakeholders together to create a core wallet engine (instead of another wallet) that can empower the actual wallet providers to build better products at lower cost. We look forward to supporting this initiative by leveraging our community resources and knowledge/expertise to develop a truly collaborative movement.”
Masa Mashita, Senior Vice President, Strategic Innovations, JCB Co., Ltd. said, “Wallets for identity management as well as payments will be a key function of the future user interface. The concept of OpenWallet will be beneficial for interoperability among multiple industries and jurisdictions.”
“Secure and open wallets will allow individuals the world over to store, combine and use their credentials in new ways – allowing them to seamlessly assert their identity, manage payments, access services, etc., and empower them with control of their data. This brings together many of our efforts in India around identity, payments, credentials, data empowerment, health, etc. in an open manner, and will empower billions of people around the world,” said Sanjay Jain, Chairman of the Technology Committee of MOSIP.
“The Open Identity Exchange (OIX) welcomes and supports the creation of the OpenWallet Foundation. The creation of open source components that will allow wallet providers to work to standards and trust framework policies in a consistent way is entirely complementary to our own work on open and interoperable Digital Identities. OIX’s Global Interoperability working group is already defining a ‘trust framework policy characteristics methodology,’ as part of our contribution to GAIN. This will allow any trust framework to systematically describe itself to an open wallet, so that a ‘smart wallet’ can seamlessly adapt to the rules of a new framework within which the user wants to assert credentials,” said Nick Mothershaw, Chief Identity Strategist, OIX.
“Okta’s vision is to enable anyone to safely use any technology”, says Randy Nasson, Director of Product Management at Okta. “Digital wallets are emerging as go-to applications for conducting financial transactions, providing identity and vital data, and storing medical information such as vaccination status. Wallets will expand to include other credentials, including professional and academic certifications, membership status, and more. Digital credentials, including their issuance, storage in wallets, and presentation, will impact the way humans authenticate and authorize themselves with digital systems in the coming decade. Okta is excited about the efforts of the OpenWallet Foundation and the Linux Foundation to provide standards-based, open wallet technology for developers and organizations around the world.”
“The OpenID Foundation welcomes the formation of the OpenWallet Foundation and its efforts to create an open-source implementation of open and interoperable technical standards, certification and best practices.” – Nat Sakimura, Chairman, OpenID Foundation.
“We believe the future of online trust and privacy starts with a system for individuals to take control over their digital identity, and interoperability will create broad accessibility,” says Rakesh Thaker, Chief Development Officer at Ping Identity. “We intend to actively participate and contribute to creating common specifications for secure, robust credential wallets to empower people with control over when and with whom they share their personal data.”
“Wallet technologies that are open and interoperable are a key factor in enabling citizens to protect their privacy in the digital world. At polypoly – an initiative backed by the first pan-European cooperative for data – we absolutely believe that privacy is a human right! We are already working on open source wallets and are excited to collaborate with others and to contribute to the OpenWallet Foundation,” said Lars Eilebrecht, CISO, polypoly.
“Digital credentials and the wallets that manage them form the trust foundation of a digital society. With the future set to be characterised by a plurality of wallets and underlying standards, broad interoperability is key to delivering seamless digital interactions for citizens. Procivis is proud to support the efforts of the OpenWallet Foundation to build a secure, interoperable, and open wallet engine which enables every individual to retain sovereignty over their digital identities,” said Daniel Gasteiger, Chief Executive Officer, Procivis AG.
“It is essential to cross the boundaries between humans, enterprises, and systems to create value in a fully connected world. There is an urgent need for a truly portable, interoperable identity & credentialing backbone for all digital-first processes in government, business, peer-to-peer, smart city systems, and the Metaverse. The OpenWallet Foundation will establish high-quality wallet components that can be assembled into SW solutions unlocking a new universe of next-level digitization, security, and compliance,” said Dr. Carsten Stöcker, CEO Spherity & Chairman of the Supervisory Board IDunion SCE.
“Transmute has long promoted open source standards as the foundation for building evolved solutions that challenge the status quo. Transmute believes any organization should be empowered to create a digital wallet that can securely manage identifiers, credentials, currencies, and payments while complying with regulatory requirements regarding trusted applications and devices. Transmute supports a future of technology that will reflect exactly what OpenWallet Foundation wants to achieve: one that breaks with convention to foster innovation in a secure, interoperable way, benefitting competitive companies, consumers, and developers alike,” said Orie Steele, Co-Founder and CTO of Transmute.
“The Trust Over IP (ToIP) Foundation is proud to support the momentum of an industry-wide open-source engine for digital wallets. We believe this can be a key building block in our mission to establish an open standard trust layer for the Internet. We look forward to our Design Principles and Reference Architecture benefitting this endeavor and collaborating closely with this new Linux Foundation project,” said Judith Fleenor, Director of Strategic Engagement, Trust Over IP Foundation.
For more information about the project and how to participate in this work, please visit: openwallet.foundation.
About the Linux Foundation
Founded in 2000, the Linux Foundation and its projects are supported by more than 3,000 members. The Linux Foundation is the world’s leading home for collaboration on open source software, hardware, standards, and data. Linux Foundation projects are critical to the world’s infrastructure including Linux, Kubernetes, Node.js, ONAP, Hyperledger, RISC-V, PyTorch, and more. The Linux Foundation’s methodology focuses on leveraging best practices and addressing the needs of contributors, users, and solution providers to create sustainable models for open collaboration. For more information, please visit us at linuxfoundation.org.
###
The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our trademark usage page: https://www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.
Media Contact:
Dan Whiting for the Linux Foundation +1 202-531-9091 dwhiting@linuxfoundation.org
Today we are more than thrilled to welcome PyTorch to the Linux Foundation. Honestly, it’s hard to capture how big a deal this is for us in a single post but I’ll try.
TL;DR — PyTorch is one of the most important and successful machine learning software projects in the world today. We are excited to work with the project maintainers, contributors and community to transition PyTorch to a neutral home where it can continue to enjoy strong growth and rapid innovation. We are grateful to the team at Meta, where PyTorch was incubated and grew into a massive ecosystem, for trusting the Linux Foundation with this crucial effort. The journey will be epic.
The AI Imperative, Open Source and PyTorch
Artificial Intelligence, Machine Learning, and Deep Learning are critical to present and future technology innovation. Growth around AI and ML communities and the code they generate has been nothing short of extraordinary. AI/ML is also a truly “open source-first” ecosystem. The majority of popular AI and ML tools and frameworks are open source. The community clearly values transparency and the ethos of open source. Open source communities are playing and will play a leading role in development of the tools and solutions that make AI and ML possible — and make it better over time.
For all of the above reasons, the Linux Foundation understands that fostering open source in AI and ML is a key priority. The Linux Foundation already hosts and works with many projects that either contribute directly to foundational AI/ML projects (LF AI & Data) or contribute to their use cases and integrate with their platforms (e.g., LF Networking, AGL, Delta Lake, RISC-V, CNCF, Hyperledger).
PyTorch extends and builds on these efforts. PyTorch is one of the most important foundational platforms for the development, testing, and deployment of AI/ML and Deep Learning applications. If you need to build something in AI, if you need a library or a module, chances are there is something in PyTorch for that. If you peel back the cover of any AI application, there is a strong chance PyTorch is involved in some way. From improving the accuracy of disease and heart attack diagnoses, to machine learning frameworks for self-driving cars, to image quality assessment tools for astronomers, PyTorch is there.
Originally incubated by Meta’s AI team, PyTorch has grown to include a massive community of contributors and users under Meta’s community-focused stewardship. The genius of PyTorch (and a credit to its maintainers) is that it is truly a foundational platform for so much AI/ML today, a real Swiss Army knife. Just as developers built so much of the technology we know today atop Linux, the AI/ML community is building atop PyTorch – further enabling emerging technologies and evolving user needs. As of August 2022, PyTorch was one of the five fastest-growing open source software communities in the world, alongside the Linux kernel and Kubernetes. From August 2021 through August 2022, PyTorch counted over 65,000 commits. Over 2,400 contributors participated in the effort, filing issues or PRs or writing documentation. These numbers place PyTorch among the most successful open source projects in history.
Neutrality as a Catalyst
Projects like PyTorch that have the potential to become a foundational platform for critical technology benefit from a neutral home. Neutrality and true community ownership are what have enabled Linux and Kubernetes to defy expectations, continuing to accelerate and grow even as they mature. Users, maintainers, and the community begin to see them as part of a commons that they can rely on and trust, in perpetuity. By creating a neutral home, the PyTorch Foundation, we are collectively locking in a future of transparency, communal governance, and unprecedented scale for all.
As part of the Linux Foundation, PyTorch and its community will benefit from our many programs and support offerings: training and certification programs (we already have one in the works), community research (like our Project Journey Reports), and, of course, community events. Working inside and alongside the Linux Foundation, the PyTorch community also has access to our LFX collaboration portal, enabling mentorships and helping the PyTorch community identify future leaders, find potential hires, and observe shared community dynamics.
PyTorch has gotten to its current state through sound maintainership and open source community management. We’re not going to change any of the good things about PyTorch. In fact, we can’t wait to learn from Meta and the PyTorch community to improve the experiences and outcomes of other projects in the Foundation. For those wanting more insight into our plans for the PyTorch Foundation, I invite you to join Soumith Chintala (co-creator of PyTorch) and Dr. Ibrahim Haddad (Executive Director of the PyTorch Foundation) for a live discussion on Thursday entitled PyTorch: A Foundation for Open Source AI/ML.
We are grateful for Meta’s trust in “passing us the torch” (pun intended). Together with the community, we can build something (even more) insanely great and add to the global heritage of invaluable technology that underpins the present and the future of our lives. Welcome, PyTorch! We can’t wait to get started!
PyTorch Foundation to foster an ecosystem of vendor-neutral projects alongside founding members AMD, AWS, Google Cloud, Meta, Microsoft Azure, and NVIDIA
DUBLIN – September 12, 2022 – The Linux Foundation, a global nonprofit organization enabling innovation through open source, today announced that PyTorch is moving from Meta to the Linux Foundation, where it will live under the newly formed PyTorch Foundation. Since its release in 2016, over 2,400 contributors and 18,000 organizations have adopted the PyTorch machine learning framework for use in academic research and production environments. The Linux Foundation will work with project maintainers, its developer community, and initial founding members of PyTorch to support the ecosystem at its new home.
Projects like PyTorch—that have the potential to become a foundational platform for critical technology—benefit from a neutral home. As part of the Linux Foundation, PyTorch and its community will benefit from many programs and support infrastructure like training and certification programs, research, and local to global events. Working inside and alongside the Linux Foundation, PyTorch will have access to the LFX collaboration portal—enabling mentorships and helping the PyTorch community identify future leaders, find potential hires, and observe shared project dynamics.
“Growth around AI/ML and Deep Learning has been nothing short of extraordinary—and the community embrace of PyTorch has led to it becoming one of the five fastest-growing open source software projects in the world,” said Jim Zemlin, executive director for the Linux Foundation. “Bringing PyTorch to the Linux Foundation, where its global community will continue to thrive, is a true honor. We are grateful to the team at Meta—where PyTorch was incubated and grown into a massive ecosystem—for trusting the Linux Foundation with this crucial effort.”
“Some AI news: we’re moving PyTorch, the open source AI framework led by Meta researchers, to become a project governed under the Linux Foundation. PyTorch has become one of the leading AI platforms with more than 150,000 projects on GitHub built on the framework. The new PyTorch Foundation board will include many of the AI leaders who’ve helped get the community where it is today, including Meta and our partners at AMD, Amazon, Google, Microsoft, and NVIDIA. I’m excited to keep building the PyTorch community and advancing AI research,” said Mark Zuckerberg, Founder & CEO, Meta.
The Linux Foundation has named Dr. Ibrahim Haddad, its Vice President of Strategic Programs, as the Executive Director of the PyTorch Foundation. The PyTorch Foundation will support a strong member ecosystem with a diverse governing board including founding members: AMD, Amazon Web Services (AWS), Google Cloud, Meta, Microsoft Azure and NVIDIA. The project will promote continued advancement of the PyTorch ecosystem through its thriving maintainer and contributor communities. The PyTorch Foundation will ensure the transparency and governance required of such critical open source projects, while also continuing to support its unprecedented growth.
Member Quotes
AMD
“Open software is critical to advancing HPC, AI and ML research, and we’re ready to bring our experience with open software platforms and innovation to the PyTorch Foundation,” said Brad McCredie, corporate vice president, Data Center and Accelerated Processing, AMD. “AMD Instinct accelerators and ROCm software power important HPC and ML sites around the world, from exascale supercomputers at research labs to major cloud deployments showcasing the convergence of HPC and AI/ML. Together with other foundation members, we will support the acceleration of science and research that can make a dramatic impact on the world.”
Amazon Web Services
“AWS is committed to democratizing data science and machine learning, and PyTorch is a foundational open source tool that furthers that goal,” said Brian Granger, senior principal technologist at AWS. “The creation of the PyTorch Foundation is a significant step forward for the PyTorch community. Working alongside The Linux Foundation and other foundation members, we will continue to help build and grow PyTorch to deliver more value to our customers and the PyTorch community at large.”
Google Cloud
“At Google Cloud we’re committed to meeting our customers where they are in their digital transformation journey, and that means ensuring they have the power of choice,” said Andrew Moore, vice president and general manager of Google Cloud AI and industry solutions. “We’re participating in the PyTorch Foundation to further demonstrate our commitment to choice in ML development. We look forward to working closely on its mission to drive adoption of AI tooling by building an ecosystem of open source projects with PyTorch, along with our continued investment in JAX and TensorFlow.”
Microsoft Azure
“We’re honored to participate in the PyTorch Foundation and partner with industry leaders to make open source innovation with PyTorch accessible to everyone,” Eric Boyd, CVP, AI Platform, Microsoft, said. “Over the years, Microsoft has invested heavily to create an optimized environment for our customers to create, train and deploy their PyTorch workloads on Azure. Microsoft products and services run on trust, and we’re committed to continuing to deliver innovation that fosters a healthy open source ecosystem that developers love to use. We look forward to helping the global AI community evolve, expand and thrive by providing technical direction based on our latest AI technologies and research.”
NVIDIA
“PyTorch was developed from the beginning as an open source framework with first-class support on NVIDIA Accelerated Computing”, said Ian Buck, General Manager and Vice President of Accelerated Computing at NVIDIA. “NVIDIA is excited to be an originating member of the PyTorch Foundation to encourage community adoption and to ensure using PyTorch on the NVIDIA AI platform delivers excellent performance with the best experience possible.”
Additional Resources:
Visit pytorch.org to learn more about the project and the PyTorch Foundation
###
Because of my position as Executive Producer and host of The Untold Stories of Open Source, I frequently get asked, “What podcasts do you listen to when you’re not producing your own?” Interesting question. However, my personal preference, This American Life, appeals to me more for how they create their shows, how they use sound and music to supplement the narration, and, in general, how Ira Glass does what he does. Only podcast geeks would be interested in that, so I reached out to my friends in the tech industry to ask them what THEY listen to.
The most surprising thing I learned was people professing not to listen to podcasts. “I don’t listen to podcasts, but if I had to choose one…” kept popping up. The second thing was that people in the industry need a break and use podcasts to escape from the mayhem of their day. Jennifer says it best: “Since much of my role is getting developers on board with security actions, I gravitate toward more psychology based podcasts – Adam Grant’s is amazing (it’s called WorkLife).”
Mike Jones and Mike LeBlanc built the H4unt3d Hacker podcast and group from a truly grassroots point of view; the idea was spawned over a glass of bourbon on the top of a mountain. The group consists of members from around the globe and from various walks of life, religions, and backgrounds, and is all-inclusive. They pride themselves on giving back and helping people understand the cybersecurity industry and navigate the various challenges one faces when deciding cybersecurity is where they belong.
“I think he strikes a great balance between newbie/expert, current events and all purpose security and it has a nice vibe” – Alan Shimel, CEO, Founder, TechStrong Group
Published weekly, the Risky Business podcast features news and in-depth commentary from security industry luminaries. Hosted by award-winning journalist Patrick Gray, Risky Business has become a must-listen digest for information security professionals. We are also known to publish blog posts from time to time.
“My single listen-every-week-when-it-comes out is not that revolutionary: the classic Risky Biz security podcast. As a defender, I learn from the offense perspective, and they also aren’t shy about touching on the policy side.” – Allan Friedman, Cybersecurity and Infrastructure Security Agency
Hosted by Mike Shema, Matt Alderman, and John Kinsella
If you’re looking to understand DevOps, application security, or cloud security, then Application Security Weekly is your show! Mike, Matt, and John decrypt application development – exploring how to inject security into the organization’s Software Development Lifecycle (SDLC); learn the tools, techniques, and processes necessary to move at the speed of DevOps, and cover the latest application security news.
“Easily my favorite hosts and content. Professional production, big personality host, and deeply technical co-host. Combined with great topics and guests.” – Larry Maccherone, Dev[Sec]Ops Transformation Architect, Contrast Security
The Azure DevOps Podcast is a show for developers and devops professionals shipping software using Microsoft technologies. Each show brings you hard-hitting interviews with industry experts innovating better methods and sharing success stories. Listen in to learn how to increase quality, ship quickly, and operate well.
“I am pretty focused on Microsoft Azure these days so on my list is Azure DevOps” – Bob Aiello CM Best Practices Founder, CTO, and Principal Consultant
Hosted by Community of Chaos Engineering Practitioners
We are a community of chaos engineering practitioners. Chaos Engineering is the discipline of experimenting on a system in order to build confidence in the system’s capability to withstand turbulent conditions in production.
“This is so good, it’s hardly even fair to compare it to other podcasts!” – Casey Rosenthal, CEO, Co-founder, Verica
The Daily Beans is a women-owned and operated progressive news podcast for your morning commute brought to you by the webby award-winning hosts of Mueller, She Wrote. Get your social justice and political news with just the right amount of snark.
“The Daily Beans covers political news without hype. The host is a lawyer and restricts her coverage to what can actually happen while other outlets are hyping every possibility under the sun including possibilities that get good ratings but will never happen. She mostly covers the former president’s criminal cases.” – Tom Limoncelli, Manager, Stack Overflow
Software Engineering Radio is a podcast targeted at the professional software developer. The goal is to be a lasting educational resource, not a newscast. Now a weekly show, we talk to experts from throughout the software engineering world about the full range of topics that matter to professional developers. All SE Radio episodes feature original content; we don’t record conferences or talks given in other venues.
“The one that I love to keep tabs on is called Software Engineering Radio, published by the IEEE computer society. It is absolutely a haberdashery of new ideas, processes, lessons learned. It also ranges from very practical action oriented advice the whole way over to philosophical discussions that are necessary for us to drive innovation forward. Professionals from all different domains contribute. It’s not a platform for sales and marketing pitches!” – Tracy Bannon, Senior Principal/ Software Architect & DevOps Advisor, MITRE
Join thousands of other listeners to hear from the current leaders, experts, vendors, and instructors in the IT and Cybersecurity fields regarding DevSecOps, InfoSec, Ransomware attacks, the diversity and retention of talent, and more. Gain the confidence, consistency, and courage to succeed at work and in life.
“Relaxed chat, full of good info, and they got right to the point. Would recommend.” – Wendy Nather, Head of Advisory CISOs, CISCO
Open Source Underdogs is the podcast for entrepreneurs about open source software. In each episode, we chat with a founder or leader to explore how they are building thriving businesses around open source software. Our goal is to demystify how entrepreneurs can stay true to their open source objectives while also building sustainable, profitable businesses that fuel innovation and ensure longevity.
“Mike Schwartz’s podcast is my favourite. Really good insights from founders.” – Amanda Brock, CEO, OpenUK
Ten Percent Happier publishes a variety of podcasts that offer relatable wisdom designed to help you meet the challenges and opportunities in your daily life.
“I listen to Ten Percent Happier as my go-to podcast. It helps me with mindfulness practice, provides a perspective on real-life situations, and makes me a kinder person. That is one of the most important traits we all need these days.” – Arun Gupta, Vice President and General Manager for Open Ecosystem, Intel
Sam Harris is the author of five New York Times best sellers. His books include The End of Faith, Letter to a Christian Nation, The Moral Landscape, Free Will, Lying, Waking Up, and Islam and the Future of Tolerance (with Maajid Nawaz). The End of Faith won the 2005 PEN Award for Nonfiction. His writing and public lectures cover a wide range of topics—neuroscience, moral philosophy, religion, meditation practice, human violence, rationality—but generally focus on how a growing understanding of ourselves and the world is changing our sense of how we should live.
“Sam dives deep on topics rooted in our culture, business, and minds. The conversations are very approachable and rational. With some episodes reaching an hour or more, Sam gives topics enough space to cover the necessary angles.” – Derek Weeks, CMO, The Linux Foundation
Darknet Diaries produces audio stories specifically intended to capture, preserve, and explain the culture around hacking and cyber security in order to educate and entertain both technical and non-technical audiences.
This is a podcast about hackers, breaches, shadow government activity, hacktivism, cybercrime, and all the things that dwell on the hidden parts of the network.
“Darknet Diaries would be my recommendation. Provided insights into the world of hacking, data breaches and cyber crime. And Jack Rhysider is a good storyteller ” – Edwin Kwan, Head of Application Security and Advisory, Tyro Payments
Under the Skin asks: what’s beneath the surface – of the people we admire, of the ideas that define our times, of the history we are told. Speaking with guests from the world of academia, popular culture and the arts, they’ll teach us to see the ulterior truth behind our constructed reality. And have a laugh.
“He interviews influential people from all different backgrounds and covers everything from academia to tech to culture to spiritual issues” – Ashleigh Auld, Global Director Partner Marketing, Linnwood
The daily cybersecurity news and analysis industry leaders depend on. Published each weekday, the program also includes interviews with a wide spectrum of experts from industry, academia, and research organizations all over the world.
“I’d recommend the CyberWire Daily podcast; it has the most relevant InfoSec news items and stories industry pros care about.” – Ax Sharma, Security Researcher, Tech Reporter, Sonatype
7 Minute Security is a weekly audio podcast (once in a while with video!) released on Wednesdays and covering topics such as penetration testing, blue teaming, and building a career in security.
In 2013 I took on a new adventure to focus 100% on information security. There’s a ton to learn, so I wanted to write it all down in a blog format and share with others. However, I’m a family man too, and didn’t want this project to upset the work/family balance.
So I thought a podcast might fill in the gaps for stuff I can’t – or don’t have time to – write out in full form. I always loved the idea of a podcast, but the good ones are usually in a longer format, and I knew I didn’t have time for that either. I was inspired by the format of the 10 Minute Podcast and figured if it can work for comedy, maybe it can work for information security!
Thus, the 7 Minute Security blog and its child podcast was born.
“7 Minute Security Podcast – because Brian makes the best jingles!” – Björn Kimminich, Product Group Lead Architecture Governance, Kuehne + Nagel (AG & Co.) KG
Explores ideas that help to produce Better Software Faster: Continuous Delivery, DevOps, TDD and Software Engineering.
Hosted by Dave Farley – a software developer who has done pioneering work in DevOps, CD, CI, BDD, TDD and Software Engineering. Dave has challenged conventional thinking and led teams to build world class software.
Dave is co-author of the award-winning book “Continuous Delivery”, and a popular conference speaker on Software Engineering. He built one of the world’s fastest financial exchanges, is a pioneer of BDD, an author of the Reactive Manifesto, and winner of the Duke award for open source software – the LMAX Disruptor.
“Dave Farley’s videos are a treasure trove of knowledge that took me and others years to uncover when we were starting out. His focus on engineering and business outcomes rather than processes and frameworks is a breath of fresh air. If you only have time for one source of information, use his.” – Bryan Finster, Value Stream Architect, Defense Unicorns
A fast and fluid weekly thirty minute show where Scott tears into the taxonomy of the tech business with unfiltered, data-driven insights, bold predictions, and thoughtful advice.
“Very current very modern. Business and tech oriented. Talks about markets and economics and people and tech.” – Caroline Wong, Chief Strategy Officer, Cobalt
We have a security tabletop game that Josh created some time ago. Rather than play a boring security tabletop exercise, what if it had things like dice and fun? Take a look at the Dungeons and Data tabletop game.
“It has been something I’ve been listening to a lot lately with all of the focus on Software Supply Chain Security and Open Source Security. The hosts have very deep software and security backgrounds but keep the show light-hearted and engaging as well. ” – Chris Hughes, CISO, Co-Founder Aquia Inc
Hosted by Kara Swisher and Professor Scott Galloway
Every Tuesday and Friday, tech journalist Kara Swisher and NYU Professor Scott Galloway offer sharp, unfiltered insights into the biggest stories in tech, business, and politics. They make bold predictions, pick winners and losers, and bicker and banter like no one else. After all, with great power comes great scrutiny. From New York Magazine and the Vox Media Podcast Network.
“As a rule, I don’t listen to tech podcasts much at all, since I write about tech almost all day. I check out podcasts about theater or culture — about as far away from my day job as I can get. However, I follow a ‘man-about-town’ guy named George Hahn on social media, who’s a lot of fun. Last year, he mentioned he’d be a guest host of the ‘Pivot’ podcast with Kara Swisher and Scott Galloway, so I checked out Pivot. It’s about tech but it’s also about culture, politics, business, you name it. So that’s become the podcast I dip into when I want to hear a bit about tech, but in a cocktail-party/talk show kind of way.” – Christine Kent, Communications Strategist, Christine Kent Communications
Conversations with experts about the important ideas changing how organizations compete and win. In The Idealcast, multiple award-winning CTO, researcher and bestselling author Gene Kim hosts technology and business leaders to explore the dangerous, shifting digital landscape. Listeners will hear insights and gain solutions to help their enterprises thrive in an evolving business world.
“I like this because it has a good balance of technical and culture/leadership content.” – Courtney Kissler, CTO, Zulily
Hosted by Dave Kennedy and Various Team Contributors
Our team records a regular podcast covering the latest security news and stories in an entertaining and informational discussion. Hear what our experts are thinking and talking about.
“I LOVE LOVE LOVE the TrustedSec Security Podcast. Dave Kennedy’s team puts on a very nice and often deeply technical conversation every two weeks. They talk about timely topics from today’s headlines as well as jumping into purple team hackery, which is a real treat to listen in on and learn from.” – CRob Robinson, Director of Security Communications, Intel Product Assurance and Security, Intel
Ramblings about W. Edwards Deming in the digital transformation era. The general idea of the podcast is derived from Dr. Deming’s seminal work described in his New Economics book – the System of Profound Knowledge (SoPK). We’ll try to get a mix of interviews from IT, Healthcare, and Manufacturing with the goal of aligning these ideas with Digital Transformation possibilities. Everything related to Dr. Deming’s ideas is on the table (e.g., Goldratt, C.I. Lewis, Ohno, Shingo, Lean, Agile, and DevOps).
“I don’t listen to podcasts much these days (found that consuming books via Audible was more useful… but I guess it all depends on how emerging the topics are that you are interested in). I only mention this as I am thin on recommendations. I’d go with John Willis’s Profound or Gene Kim’s Idealcast. Some overlap in (world-class) guests but different interview approaches and perspectives.” – Damon Edwards, Sr. Director, Product, PagerDuty
Stay up-to-date and deepen your cybersecurity acumen with Security Now. On this long-running podcast, cybersecurity authority Steve Gibson and technology expert Leo Laporte bring their extensive and historical knowledge to explore digital security topics in depth. Each week, they take complex issues and break them down for clarity and big-picture understanding. And they do it all in an approachable, conversational style infused with their unique sense of humor. Listen and subscribe, and stay on top of the constantly changing world of Internet security. Security Now records every Tuesday afternoon and hits your podcatcher later that evening.
“The shows cover a wide range of security topics, from the basics of technologies such as DNSSEC & Bitcoin, to in-depth tech analysis of the latest hacks hitting the news. The main host, Steve Gibson, is great at breaking down tech subjects over an audio medium – in a way you can follow and be interested in during your commute or flight. It’s running at over 800 episodes now, regular as clockwork every week, so you can rely on it. Funnily, Steve Gibson has often reminded me of you – able to assess what’s going on with a subject, calmly find the important points, and describe them to the rest of us in a way that’s engaging and relatable.” – Gary Robinson, Chief Security Officer, Ulseka
Today, The Jordan Harbinger Show has over 15 million downloads per month and features a wide array of guests like Kobe Bryant, Moby, Dennis Rodman, Tip “T.I.” Harris, Tony Hawk, Cesar Millan, Simon Sinek, Eric Schmidt, and Neil deGrasse Tyson, to name a few. Jordan continues to teach his skills, for free, at 6-Minute Networking. In addition to hosting The Jordan Harbinger Show, Jordan is a consultant for law enforcement, military, and security companies and is a member of the New York State Bar Association and the Northern California Chapter of the Society of Professional Journalists.
“Excellent podcast where he interviews people from literally every walk of life about how they have become successful, why they have failed (if they have), as well as great personal development coaching ideas.” – Jeff DeVerter, CTO, Products and Services, RackSpace
Adam hosts WorkLife, a chart-topping TED original podcast. His TED talks on languishing, original thinkers, and givers and takers have been viewed more than 30 million times. His speaking and consulting clients include Google, the NBA, Bridgewater, and the Gates Foundation. He writes on work and psychology for the New York Times, has served on the Defense Innovation Board at the Pentagon, has been honored as a Young Global Leader by the World Economic Forum, and has appeared on Billions.
“I don’t listen to many technical podcasts. I like Caroline Wongs and have listened to it a number of times (Humans of InfoSec) but since much of my role is getting developers on board with security actions, I gravitate toward more psychology based podcasts – Adam Grant’s is amazing (it’s called WorkLife).” – Jennifer Czaplewski, Senior Director, Cyber Security, Target
“You know lately I have been listening to WorkLife with Adam Grant. Not a tech podcast but a management one.” – Paula Thrasher, Senior Director Infrastructure, PagerDuty
Hosted by Core Team Members: Betsy Beyer, MP English, Salim Virji, Viv
The Google Prodcast Team has gone through quite a few iterations and hiatuses over the years, and many people have had a hand in its existence. For the longest time, a handful of SREs produced the Prodcast for the listening pleasure of the other engineers here at Google.
We wanted to make something that would be of interest to folks across organizations and technical implementations. In his last act as part of the Prodcast, JTR put us in touch with Jennifer Petoff, Director of SRE Education, in order to have the support of the SRE organization behind us.
“The SRE Prodcast is Google’s podcast about Site Reliability Engineering and production software. In Season 1, we discuss concepts from the SRE Book with experts at Google.” – Jennifer Petoff, Director, Program Management, Cloud Technical Education Google
Every weekday, Kai Ryssdal and Kimberly Adams break down the news in tech, the economy, and culture. How do companies make money from disinformation? How can we tackle student debt? Why do 401(k)s exist? What will it take to keep working moms from leaving the workforce? Together, we dig into complex topics to help make today make sense.
“I literally learn 3 new things about topics I never would have tried to learn about.” – Kadi Grigg, Enablement Specialist, Sonatype
Conversations for the Curious is an award-winning weekly podcast hosted by Russ Roberts of Shalem College in Jerusalem and Stanford’s Hoover Institution. The eclectic guest list includes authors, doctors, psychologists, historians, philosophers, economists, and more. Learn how the health care system really works, the serenity that comes from humility, the challenge of interpreting data, how potato chips are made, what it’s like to run an upscale Manhattan restaurant, what caused the 2008 financial crisis, the nature of consciousness, and more.
“The only podcast I listen to is actually EconTalk, which has nothing to do with tech!” – Kelly Shortridge, Senior Principal, Product Technology, Fastly
The Future of Work With Jacob Morgan is a unique show that explores how the world of work is changing, and what we need to do in order to thrive. Each week several episodes are released, which range from long-form interviews with the world’s top business leaders and authors to shorter-form episodes which provide a strategy or tip that listeners can apply to become more successful.
The show is hosted by 4x best-selling author, speaker and futurist Jacob Morgan and the goal is to give listeners the inspiration, the tools, and the resources they need to succeed and grow at work and in life.
Episodes are not scripted which makes for fun, authentic, engaging, and educational episodes filled with insights and practical advice.
“It is hard for me to keep up with podcasts. The one I listen to regularly is “Leading The Future of Work” by Jacob Morgan. I know it is not technical, but I think it is extremely important for technical people to understand what the business thinks and is concerned about.” – Keyaan Williams, Managing Director, CLASS-LLC
Deception, influence, and social engineering in the world of cyber crime.
Join Dave Bittner and Joe Carrigan each week as they look behind the social engineering scams, phishing schemes, and criminal exploits that are making headlines and taking a heavy toll on organizations around the world.
“In case we needed any reminders that humanity is a scary place.” – Matt Howard, SVP and CMO, Virtu
Hosted by Ashish Rajan, Shilpi Bhattacharjee, and Various Contributors
Cloud Security Podcast is a WEEKLY Video and Audio Podcast that brings in-depth cloud security knowledge to you from the best and brightest cloud security experts and leaders in the industry each week over our LIVE STREAMs.
We are the FIRST podcast that carved the niche for Cloud Security in late 2019. As of 2021, the large cloud service providers (Azure, Google Cloud, etc.) have all followed suit and started their own cloud security podcasts. While we recommend you listen to their podcasts as well, we’re the ONLY VENDOR NEUTRAL podcast in the space and will preserve our neutrality indefinitely.
“I really love Ashish’s cloud security podcast, listened to it for a while now. He gets really good people on it and it’s a nice laid back listen, too.” – Simon Maple, Field CTO, Snyk
Hosted by Glenn Wilson, Steve Giguere, Jessica Cregg
In depth conversations with influencers blurring the lines between Dev, Sec, and Ops!
We speak with professionals working in cyber security, software engineering and operations to talk about a number of DevSecOps topics. We discuss how organisations factor security into their product delivery cycles without compromising the value of doing DevOps and Agile.
“One of my favourite meetups in London ‘DevSecOps London Gathering’ has a podcast where they invite their speakers https://dsolg.com/#podcast” – Stefania Chaplin, Solutions Architect UK&I, GitLab
Longtime sportswriters Tony Kornheiser and Mike Wilbon debate and discuss the hottest topics, issues and events in the world of sports in a provocative and fast-paced format.
Similar in format to Gene Siskel and Roger Ebert’s At the Movies, PTI is known for its humorous and often loud tone, as well as the “rundown” graphic which lists the topics yet to be discussed on the right-hand side of the screen. The show’s popularity has led to the creation of similar shows on ESPN and similar segments on other series, and the rundown graphic has since been implemented on the morning editions of SportsCenter, among many imitators. – Wikipedia
“I’m interested in sports, and Tony and Mike are well-informed, amusing, and opinionated. It also doesn’t hurt any that I’ve known them since they were at The Washington Post and I was freelancing there. What you see on television, or hear on their podcast, is exactly how they are in real life. This sincerity of personality is a big reason why they’ve become so successful.” – Steven Vaughan-Nichols, Technology and business journalist and analyst, Red Ventures
This post originally appeared on LF Networking’s blog. The author, Heather Kirksey, is VP Community & Ecosystem. ONE Summit is the Linux Foundation Networking event that focuses on the networking and automation ecosystem transforming public and private sector innovation across 5G, the network edge, and cloud native solutions. Our family of open source projects addresses every layer of infrastructure needs, from the user edge to the cloud/core. Attend ONE Summit to get the scoop on hot topics for 2022!
Today LF Networking announced our schedule for ONE Summit, and I have to say that I’m extraordinarily excited. I’m excited because it means we’re growing closer to returning to meeting in person, but more importantly, I was blown away by the quality of our speaking submissions. Before I talk more about the schedule itself, I want to say that this quality is all down to you: you sent us a large number of thoughtful, interesting, and innovative ideas; you did the work that underpins those ideas; you did the work to write them up and submit them. The insight, lived experience, and future-looking thought processes humbled me with their breadth and depth. You reminded me why I love this ecosystem and the creativity within open source. We’ve all been through a tough couple of years, but we’re still here innovating, deploying, and doing work that improves the world. A huge shout out to everyone across every company, community, and project that made the job of choosing the final roster so difficult.
Now onto the content itself. As you’ve probably heard, we’ve got 5 tracks: Industry 4.0, Security and Privacy, The New Networking Stack, Operationalizing Deployment, and Emerging Technologies and Business Models:
“Industry 4.0” looks at the confluence of edge and networking technologies that enable technology to uniquely improve our interactions with the physical world, whether that’s agriculture, manufacturing, robotics, or our homes. We’ve got a great line-up focused both on use cases and the technologies that enable them.
“Security and Privacy” are the most important issues with which we as global citizens and we as an ecosystem struggle. Far from being an afterthought, security is front and center as we look at zero-trust and vulnerability management, and which technologies and policies best serve enterprises and consumers.
Technology is always front and center for open source groups and our “New Networking Stack” track dives deep into the technologies and components we will all use as we build the infrastructure of the future. In this track we have a number of experts sharing their best practices, as well as ideas for forward-looking usages.
In our “Operationalizing Deployment” track, we learn from the lived experience of those taking ideas and turning them into workable reality. We ask questions like: How do you bridge cultural divides? How do you introduce and truly leverage DevOps? How do you integrate compliance and reference architectures? How do you not only deploy but also bring in Operations? How do you automate, and how do you use tools to accomplish digital transformation in our ecosystem(s)?
Finally, we don’t focus only on today’s challenges and successes; we also look ahead with “Emerging Technologies and Business Models.” Intent, the Metaverse, MASE, scaling today’s innovation to be tomorrow’s operations, new takes on APIs – these are the concepts that will shape us over the next 5-10 years, and we talk about how to start approaching and understanding them.
Every talk that made it into this program has unique and valuable insight, and I’m so proud to be part of the communities that proposed them. I’m also honored to have worked with one of the best Programming Committees in open source events ever. These folks took so much time and care to provide both quantitative and qualitative input that helped shape this agenda. Please be sure to thank them for their time because they worked hard to take the heart of this event to the next level. If you want to be in the room and in the hallway with these great speakers, there is only ONE place to be. Early bird registration ends soon, so don’t miss out and register now!
And please don’t forget to sponsor. Creating a space for all this content does cost money, and we can’t do it without our wonderful sponsors. If you’re still on the fence, please consider how amazing these sessions are and the attendee conversations they will spark. We may not be the biggest conference out there, but we are the most focused on decision makers and end users and the supply chains that enable them. You won’t find a more engaged and thoughtful audience anywhere else.
Is your organization consuming open source software, or is it starting to contribute to open source projects? If so, perhaps it’s time for you to start an OSPO: an open source program office.
A new Linux Foundation Research report, A Deep Dive into Open Source Program Offices, published in partnership with the TODO Group and authored by Dr. Ibrahim Haddad, showcases the many forms OSPOs take, their maturity models, their responsibilities, and the challenges they face in enterprise open source adoption; it also discusses their staffing requirements in detail.
“The past two decades have accelerated open source software adoption and increased involvement in contributing to existing projects and creating new projects. Software is where a lot of value lies, and the vast majority of software developed is open source software, providing access to billions of dollars’ worth of external R&D. If your organization relies on open source software for products or services and does not yet have a formalized OSPO to manage all aspects of working with open source, please consider this report a call to establish your OSPO and drive for leadership in the open source areas that are critical to your products and services.” – Ibrahim Haddad, Ph.D., General Manager, LF AI & Data Foundation
An OSPO can help you manage and track your company’s use of open source software and assist you when interacting with other stakeholders. It can also serve as a clearinghouse for information about open source software and its usage throughout your organization.
Your OSPO is the central nervous system for an organization’s open source strategy and provides governance, oversight, and support for all things related to open source.
OSPOs create and maintain an inventory of your open source software (OSS) assets and track and manage any associated risks. The OSPO also guides how to best use open source software within the organization and can help coordinate external contributions to open source projects.
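To make the inventory idea concrete, here is a minimal, hedged sketch of what such tracking could look like in practice. It is illustrative only and not an official OSPO tool or anything prescribed by the report: it assumes a Python project whose dependencies are pinned in a requirements.txt file, and it simply records each component and version in a CSV that an OSPO could review.

# Illustrative sketch only (not from the report): build a simple OSS inventory
# for a Python project whose dependencies are pinned in requirements.txt.
import csv
from pathlib import Path

def build_inventory(requirements_file="requirements.txt", output_file="oss_inventory.csv"):
    """Record each pinned dependency so an OSPO can track what is in use."""
    rows = []
    for line in Path(requirements_file).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        name, _, version = line.partition("==")
        rows.append({"component": name, "version": version or "unpinned"})
    with open(output_file, "w", newline="") as fh:
        writer = csv.DictWriter(fh, fieldnames=["component", "version"])
        writer.writeheader()
        writer.writerows(rows)

if __name__ == "__main__":
    build_inventory()

In a real OSPO this record would typically also capture licenses, provenance, and known vulnerabilities, and would be generated automatically as part of the build rather than maintained by hand.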
To be effective, the OSPO needs to have a deep understanding of the business and the technical aspects of open source software. It also needs to work with all levels of the organization, from executives to engineers.
An OSPO is designed to:
Be the center of competency for an organization’s open source operations and structure,
Place a strategy and set of policies on top of an organization’s open source efforts.
This can include creating policies for code use, distribution, selection, auditing, and other areas; training developers; ensuring legal compliance, and promoting and building community engagement to benefit the organization strategically.
An organization’s OSPO can take many different forms, but typically it is a centralized team that reports to the company’s executive level. The size of the team will depend on the size and needs of the organization, and the OSPO itself will pass through different stages of maturity as open source adoption grows.
When starting, an OSPO might just be a single individual or a very small team. As the organization’s use of open source software grows, the OSPO can expand to include more people with different specialties. For example, there might be separate teams for compliance, legal, and community engagement.
This won’t be the last we have to say about the OSPO in 2022. There are further insights in development, including a qualitative study on the OSPO’s business value across different sectors, and the TODO Group will publish the 2022 OSPO Survey results during OSPOCon in just a few weeks.
“There is no broad template to build an OSPO. Its creation and growth can vary depending on the organization’s size, culture, industry, or even its milestones.
That’s why I keep seeing more and more open source leaders finding critical value in building connections with other professionals in the industry. OSPOCon is an excellent networking and learning space where those working (or willing to work) in open source program offices that rely on open source technologies come together to learn and share best practices, experiences, and tools to overcome the challenges they face.” – Ana Jiménez, OSPO Program Manager at TODO Group
Join us there and be sure to read the report today to gain key insights into forming and running an OSPO in your organization.
June 2022 saw the publication of Addressing Cybersecurity Challenges in Open Source Software, a joint research initiative launched by the Open Source Security Foundation in collaboration with Linux Foundation Research and Snyk. The research dives into security concerns in the open source ecosystem. If you haven’t read it, this article will give you the report’s who, what, and why, summarizing its key takeaways so that it can be relevant to you or your organization.
Who is the report for?
This report is for everyone whose work touches open source software. Whether you’re a user of open source, an OSS developer, or part of an OSS-related institution or foundation, you can benefit from a better understanding of the state of security in the ecosystem.
Open source consumers and users: It’s very likely that you rely on open source software as dependencies if you develop software. And if you do, one important consideration is the security of the software supply chain. Security incidents such as Log4Shell have shown how open source supply chain security touches nearly every industry. Even industries and organizations that have traditionally not focused on open source software now realize the importance of ensuring their OSS dependencies are secure. Understanding the state of OSS security can help you manage your dependencies intelligently, choose them wisely, and keep them up to date (a minimal sketch of one such check appears after this list).
Open source developers and maintainers: People and organizations that develop or maintain open source software need to ensure they use best practices and policies for security. For example, it can be valuable for large organizations to have open source security policies. Moreover, many OSS developers also use other open source software as dependencies, making understanding the OSS security landscape even more valuable. Developers have a unique role to play in leading the creation of high-quality code and the respective governance frameworks and best practices around it.
Institutions: Institutions such as open source foundations, funders, and policymaking groups can benefit from this report by understanding and implementing the key findings of the research and their respective roles in improving the current state of the OSS ecosystem. Funding and support can only go to the right areas if priorities are informed by the problems the community is facing now, which the research assists in identifying.
What are the major takeaways?
The data for this report were collected through a worldwide survey of:
Individuals who contribute to, use, or administer OSS;
Maintainers, core contributors, and occasional contributors to OSS;
Developers of proprietary software who use OSS; and
Individuals with a strong focus on software supply chain security
The research also drew on data collected from several major package ecosystems using Snyk Open Source, a software composition analysis (SCA) tool that is free for individuals and open source maintainers.
Here are the major takeaways and recommendations from the report:
Too many organizations are not prepared to address OSS security needs: At least 34% of organizations did not have an OSS security policy in place, suggesting these organizations may not be prepared to address OSS security needs.
Small organizations must prioritize developing an OSS security policy: Small organizations are significantly less likely to have an OSS security policy. These organizations should prioritize developing one and putting a CISO and an OSPO in place.
Using additional security tools is a leading way to improve OSS security: Security tooling is available for open source security across the software development lifecycle. Moreover, organizations with an OSS security policy have a higher frequency of security tool use than those without an OSS security policy.
Collaborate with vendors to create more intelligent security tools: Organizations consider that one of the most important ways to improve OSS security across the supply chain is adding greater intelligence to existing software security tools, making it easier to integrate OSS security into existing workflows and build systems.
Implementing best practices for secure software development is the other leading way to improve OSS security: Understanding best practices for secure software development, through courses such as the OpenSSF’s Secure Software Development Fundamentals Courses, has been identified repeatedly as a leading way to improve OSS supply chain security.
Use automation to reduce your attack surface: Infrastructure as Code (IaC) tools and scanners make it possible to automate CI/CD activities, eliminating threat vectors introduced by manual deployments (see the sketch after this list).
Consumers of open source software should give back to the communities that support them: The use of open source software has often been a one-way street where users see significant benefits with minimal cost or investment. For larger open source projects to meet user expectations, organizations must give back and close the loop by financially supporting OSS projects they use.
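To make the automation takeaway concrete, below is a minimal, hypothetical sketch (not taken from the report) of a CI gate that fails a build when a pinned Python dependency matches an advisory list. In practice you would delegate this check to an SCA tool such as Snyk Open Source or to a vulnerability database feed rather than maintain a list by hand; the package names, versions, and file path below are purely illustrative.

```python
#!/usr/bin/env python3
"""Hypothetical CI gate: fail the build if a pinned dependency is on an advisory list.

This is an illustrative stand-in for a real SCA tool; the advisory data below is
made up and would normally come from a vulnerability database or scanner.
"""
import sys

# Hypothetical advisory data: package name -> versions with known issues.
ADVISORIES = {
    "examplepkg": {"1.0.0", "1.0.1"},
    "otherlib": {"2.3.1"},
}


def parse_pins(path):
    """Yield (name, version) pairs from a requirements-style file using '==' pins."""
    with open(path) as fh:
        for raw in fh:
            line = raw.strip()
            if not line or line.startswith("#") or "==" not in line:
                continue
            name, version = line.split("==", 1)
            yield name.strip().lower(), version.strip()


def main(path):
    flagged = [(n, v) for n, v in parse_pins(path) if v in ADVISORIES.get(n, set())]
    for name, version in flagged:
        print(f"VULNERABLE: {name}=={version}")
    # A non-zero exit code makes the CI job fail, keeping flagged pins out of main.
    return 1 if flagged else 0


if __name__ == "__main__":
    sys.exit(main(sys.argv[1] if len(sys.argv) > 1 else "requirements.txt"))
```

Running a check like this (or, more realistically, your chosen scanner’s CLI) on every pull request is one small example of folding OSS security into existing workflows and build systems, as the takeaways above recommend.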
Why is this important now?
Open source software is a boon: its collaborative and open nature has allowed society to benefit from a wealth of innovative, reliable, and free software tools. However, these benefits only last when users contribute back to open source software and when users and developers exercise due diligence around security. While the most successful open source projects have received such support, many others have not, even as open source use has become ever more ubiquitous.
Thus, it is more important than ever to be aware of the problems everyone faces in the OSS ecosystem. Some organizations and open source maintainers have strong policies and procedures for handling these problems; but, as this report shows, others are only now beginning to confront them.
Finally, we’ve seen the risks of not maintaining proper security practices around OSS dependencies. Failure to update open source dependencies has led to costs as high as $425 million. Given these risks, a modest investment in strong security practices and awareness around open source, as outlined in the report’s recommendations, can go a long way.
We suggest you read the report, then see how you or your organization can take the next step toward staying secure!