This is an interesting networking vendor: IO River has built an overlay for CDNs that makes use of WASM (gotta love WebAssembly). It also announced a $20M fundraise today. “We are leveraging the existing physical networks, like Akamai, CloudFlare, Fastly and AWS, but we are also leveraging their edge compute,” Edward Tsinovoi told me. “On top of that, we provide a virtual layer of edge compute. So customers write their code once and it will be distributed and run on these different platforms.”
IO River Raises $20M, Leverages WASM for CDN Edge Compute
In AWS, we have the Application Load Balancer and the Network Load Balancer. If you understand networking, you already know the OSI model has 7 layers: Layer 7 handles application protocols like HTTP/HTTPS, while Layer 4 works at the transport level (TCP/UDP). That’s why the Network Load Balancer (Layer 4) is significantly faster in AWS: it operates closer to the network, with less protocol parsing and less overhead.
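The difference can be sketched in a few lines of Python (a toy illustration, not actual AWS code; all names are invented): an L4 balancer sees only addresses and ports, while an L7 balancer parses the request itself.

```python
# Illustrative sketch: the information each load balancer type can see
# when it picks a backend.

def l4_route(src_ip, src_port, backends):
    """Layer 4 (NLB-style): only IPs and ports are visible, so pick a
    backend by hashing the flow tuple -- cheap, no HTTP parsing."""
    return backends[hash((src_ip, src_port)) % len(backends)]

def l7_route(path, rules, default):
    """Layer 7 (ALB-style): the HTTP request is parsed, so routing can
    use paths and hostnames -- more flexible, but more work per request."""
    for prefix, backend in rules.items():
        if path.startswith(prefix):
            return backend
    return default
```

The extra parsing work is exactly the overhead the post refers to: L4 forwarding never has to read the HTTP payload at all.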
🚀 Load Balancer vs. Ingress Controller in Kubernetes – What’s the Difference?
When exposing applications in Kubernetes, should you use a Load Balancer or an Ingress Controller? 🔀 Both handle traffic, but choosing the right one impacts scalability, cost, and security!
🔀 Load Balancer (L4 – Network Layer)
A Load Balancer distributes external traffic to Kubernetes services at the network layer (L4 – TCP/UDP). Cloud providers (AWS, Azure, GCP) provision external load balancers when a Service type is set to LoadBalancer.
When to use a Load Balancer?
✔️ When you need to expose a single service directly to the internet.
✔️ For external traffic distribution with minimal setup.
✔️ If you need low-latency load balancing at the network level.
Benefits 👇
✔️ Simple to configure – define a Service of type LoadBalancer.
✔️ Efficient traffic routing at Layer 4 (TCP/UDP).
✔️ Cloud-managed (AWS ELB, Azure LB, GCP LB).
🔀 Ingress Controller (L7 – Application Layer)
An Ingress Controller manages HTTP(S) traffic at the application layer (L7), routing requests based on hostnames and paths and handling SSL/TLS termination. It exposes multiple services via a single entry point using an Ingress resource.
When to use an Ingress Controller?
✔️ When you need to expose multiple services under a single external IP.
✔️ For host/path-based routing (e.g., api.example.com → Service A, web.example.com → Service B).
✔️ If you need TLS termination, rate limiting, or authentication.
Benefits 👇
✔️ Consolidates traffic through one IP → cost-effective.
✔️ Supports advanced routing (host-, path-, and header-based).
✔️ TLS termination & security features (authentication, rate limiting).
#DevOps #AWS #CloudComputing #CloudArchitecture #Networking #AWSSecurity #InfrastructureAsCode #SiteReliabilityEngineering
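As a rough sketch of the L7 side, here is what host/path-based Ingress routing boils down to (hostnames and Service names are invented for the example):

```python
# Toy model of an Ingress controller's routing table: (host, path prefix)
# tuples mapped to backend Services. All names are hypothetical.
INGRESS_RULES = [
    ("api.example.com", "/", "service-a"),
    ("web.example.com", "/", "service-b"),
]

def resolve(host, path):
    """Return the backend Service for a request, or None (an HTTP 404
    from the controller's default backend)."""
    for rule_host, prefix, service in INGRESS_RULES:
        if host == rule_host and path.startswith(prefix):
            return service
    return None
```

Every rule shares one external IP; the Host header does the fan-out, which is exactly why an Ingress is the cost-effective choice for many services.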
Designed and deployed a resilient, multi-tier infrastructure on AWS to support a scalable web application. Key implementation details include:
Network Segmentation: Established a custom VPC with a redundant subnet strategy (2 public, 2 private) across multiple Availability Zones to ensure high availability.
Dynamic Scaling: Implemented an Auto Scaling Group (ASG) of EC2 instances to automatically adjust capacity based on real-time traffic demand.
Traffic Management: Deployed an Application Load Balancer (ALB) in the public subnets to distribute the workload and reduce user-perceived latency.
Secure Connectivity: Integrated an Internet Gateway (IGW) for external access and a NAT Gateway so private instances can make secure, outbound-only connections for updates.
Tiered Security: Enforced a least-privilege security model using Security Groups, restricting the web tier to accept traffic only from the ALB and shielding the application from the public internet.
Putting your entire application on a vendor's proprietary compute platform isn't optimisation; it's surrender. Cloudflare is pushing its Workers runtime as a 'full-stack' solution, but its true strategic value for a UK SME is its massive, free-to-use edge network. Focus on leveraging its caching and security features, not its lock-in. Keep your core Django or Rails app on portable, high-density compute you control (we like Hetzner via Coolify), and use Cloudflare's free tiers as a zero-cost global shield and accelerator. This decoupling gives you 100% infrastructure portability and predictable costs as you scale. We've been helping businesses optimise their cloud spend and architecture for over 15 years. Are you finding it harder to predict your monthly infrastructure bill? #CloudStrategy #TechSMEs #Infrastructure #CostOptimisation #Criztec
Cloud-Native Network Functions aren’t just virtualized workloads; they demand a different operational mindset. This article looks at how CNFs behave in real deployments, how Kubernetes orchestration affects network performance, and what teams should consider when designing carrier-grade cloud-native services. Take a look: https://lnkd.in/d7FfkBxS #CNF #Kubernetes #CloudInfrastructure #TelecomEngineering
Why do CI/CD pipelines slow down as teams grow? At scale, you hit three walls:
• Thundering herd: massive concurrent requests saturate shared registry servers.
• Network latency: cross-region fetches add minutes of idle time.
• Redundant egress: we’ve seen SaaS platforms discover that 94% of their network traffic was the same artifacts being downloaded repeatedly across teams.
One global financial institution found that nearly half its CI latency came from artifact pulls alone. If every build depends on a distant registry, no amount of pipeline optimization will keep up. The fastest pipelines are localized. Learn more: https://lnkd.in/dbrjqS2q
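The redundant-egress wall can be made concrete with a toy cache model (purely illustrative; artifact names are made up): once an artifact is cached locally, repeated pulls stop hitting the distant registry.

```python
class LocalArtifactCache:
    """Toy model: repeated pulls of the same artifact hit the local
    store; only the first pull reaches the remote registry."""

    def __init__(self):
        self._store = {}
        self.origin_fetches = 0  # slow, cross-region downloads

    def pull(self, artifact):
        if artifact not in self._store:
            self.origin_fetches += 1               # cache miss: go remote
            self._store[artifact] = "blob:" + artifact
        return self._store[artifact]               # cache hit: stay local

cache = LocalArtifactCache()
for _ in range(50):                 # 50 builds pulling the same dependency
    cache.pull("libfoo-1.2.3.tar.gz")
# cache.origin_fetches is 1: 49 of the 50 pulls never left the local network
```

That 49/50 ratio is the same shape as the 94% figure above: most registry traffic is repeat traffic, which is why localizing delivery beats tuning the pipeline itself.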
As teams grow, CI/CD performance problems are less about tooling and more about distribution. Artifact pulls alone can account for almost half of total CI latency. Fast pipelines start with localized delivery. This is exactly the kind of problem Varnish helps solve.
🚦 One AWS service that quietly keeps your application alive: the Application Load Balancer (ALB).
Most people think ALB is just “traffic distribution”. That’s only 10% of the story. What does ALB actually do in real systems?
🔹 Understands HTTP/HTTPS (Layer 7)
🔹 Routes traffic based on URL paths & hostnames
🔹 Performs continuous health checks
🔹 Automatically removes unhealthy targets
🔹 Spreads traffic across multiple AZs
🔹 Works seamlessly with Auto Scaling, WAF, and ACM
In production, this means:
✅ Zero-downtime deployments
✅ No single-instance failures
✅ Cleaner microservice routing
✅ Safer public exposure of apps
A simple example:
/login → auth service
/orders → order service
/payments → payment service
Same load balancer. Smart routing. No extra IPs.
The real lesson? 💡 ALB isn’t about load; it’s about resilience and control. If your application is still tied directly to EC2 public IPs, you’re missing one of AWS’s most powerful reliability features.
#AWS #ApplicationLoadBalancer #CloudArchitecture #DevOps #HighAvailability #SystemDesign
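The two behaviors that matter most here, health-based target removal and path-based routing, can be sketched as follows (a toy model; instance IDs and service names are invented):

```python
# Hypothetical target group: instance IDs with their health-check status.
TARGETS = {"i-0aaa": True, "i-0bbb": False, "i-0ccc": True}

def healthy_targets(targets):
    """ALB-style behavior: targets failing health checks simply stop
    receiving traffic, with no client-visible error."""
    return [t for t, healthy in targets.items() if healthy]

# Path-based rules matching the example in the post.
PATH_RULES = [("/login", "auth-service"),
              ("/orders", "order-service"),
              ("/payments", "payment-service")]

def route(path, default="default-service"):
    """Pick a backend service by the first matching path prefix."""
    for prefix, service in PATH_RULES:
        if path.startswith(prefix):
            return service
    return default
```

One entry point, three services, zero extra public IPs: that is the "resilience and control" point in miniature.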
Token Bucket Algorithm – Rate Limiting Made Simple
In high-traffic systems, controlling how many requests a service can handle is critical. That’s where the Token Bucket Algorithm comes in. 👉 It allows requests to be processed at a controlled rate while still handling short bursts efficiently.
📌 How it works
Tokens are added to a bucket at a fixed rate.
Each request consumes one token.
If the bucket is empty → the request is rejected or delayed.
📌 Example
Bucket capacity: 10 tokens. Refill rate: 1 token/second.
✔️ Up to 10 requests can be served instantly (burst).
❌ Further requests must wait until new tokens are added.
🔹 Why Token Bucket?
Allows bursts (unlike Leaky Bucket). Simple and efficient. Widely used in APIs & gateways.
🔹 Real-world usage
API rate limiting (AWS, Stripe, Google APIs), load balancers, distributed-systems throttling.
#TokenBucket #RateLimiting #SystemDesign #DistributedSystems #BackendEngineering #APIs
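The mechanics above fit in a few lines of Python (a minimal single-threaded sketch; production limiters add locking and shared state):

```python
import time

class TokenBucket:
    """Minimal token bucket: refills at `rate` tokens/sec, capped at
    `capacity`. Not thread-safe; for illustration only."""

    def __init__(self, capacity, rate):
        self.capacity = capacity
        self.rate = rate
        self.tokens = float(capacity)  # start full: allows an initial burst
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill in proportion to the time elapsed since the last call.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True      # request served
        return False         # bucket empty: reject (or queue) the request

bucket = TokenBucket(capacity=10, rate=1.0)
results = [bucket.allow() for _ in range(12)]
# The first 10 succeed (the burst); the last 2 are rejected until refill.
```

Note the refill is computed lazily from elapsed time rather than by a background timer, which is how most in-process limiters implement it.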
🚀 Is latency killing your global application's user experience? If you have users distributed globally, relying on the public internet can introduce massive variability and risk. Here is why you should look at AWS Global Accelerator:
Without acceleration, traffic hops through multiple networks and ISPs to reach your application, hurting performance.
The solution: Global Accelerator uses the AWS global network to route user traffic through the closest edge location.
Static IPs: It provides two static IP addresses that act as a fixed entry point to your application endpoints (EC2, load balancers, etc.).
Instant failover: It automatically re-routes traffic to healthy endpoints to ensure high availability.
💡 Pro tip: You can even bring your own IP addresses (BYOIP) if you have hardcoded dependencies.
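The failover behavior can be sketched like this (endpoint names and latencies are invented; the real service makes this decision inside the AWS network, behind the two static IPs):

```python
# Hypothetical application endpoints behind the static anycast entry points.
ENDPOINTS = [
    {"name": "eu-west-1-alb", "latency_ms": 12, "healthy": True},
    {"name": "us-east-1-alb", "latency_ms": 85, "healthy": True},
]

def pick_endpoint(endpoints):
    """Send traffic to the lowest-latency healthy endpoint; if it goes
    unhealthy, the next-best healthy endpoint takes over (failover)."""
    healthy = [e for e in endpoints if e["healthy"]]
    if not healthy:
        raise RuntimeError("no healthy endpoints")
    return min(healthy, key=lambda e: e["latency_ms"])["name"]
```

Because clients always connect to the same static IPs, this re-routing needs no DNS change and no client-side update.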