A load balancer distributes incoming network traffic across multiple Flexible Compute Unit (FCU) instances, in the public Cloud or in a Virtual Private Cloud (VPC), to avoid overloading any single instance and to improve the availability and reliability of your services.

You can:

  • Load balance using the TCP/SSL and HTTP/HTTPS protocols
  • Distribute the load between several Availability Zones (AZs) or subnets
  • Set up health checks to verify the status of each instance and send traffic only to the healthy ones

For the SSL protocol, 3DS OUTSCALE load balancers support the successor protocols TLS 1.1 and TLS 1.2. They do not support TLS 1.0 or the deprecated versions of the original SSL protocol.

The following topics are discussed:

  • General Information
  • Load Balancers and Back-end Instances
  • Load Balancer Types
  • DNS Name and Listeners
  • Sticky Sessions
  • SSL Termination and SSL Passthrough for Load Balancers

General Information

Load Balancers and Back-end Instances

The instances registered with a load balancer are called back-end instances, or backends. Once an instance is registered with a load balancer, it starts receiving front-end requests. You can register as many back-end instances with a load balancer as you need, and register or deregister them at any time. An instance that has been deregistered from a load balancer can be registered again with the same load balancer if needed.
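
As an illustration, here is a minimal sketch of registering and deregistering back-end instances programmatically, assuming the load balancing service exposes an ELB-compatible Query API reachable with the legacy boto 2 SDK; the endpoint, Region name, load balancer name, and instance ID below are hypothetical placeholders.

    from boto.ec2.elb import ELBConnection
    from boto.regioninfo import RegionInfo

    # Hypothetical endpoint and credentials; adapt to your Region and account.
    lbu = ELBConnection(
        aws_access_key_id="MY_ACCESS_KEY",
        aws_secret_access_key="MY_SECRET_KEY",
        region=RegionInfo(name="eu-west-2", endpoint="lbu.eu-west-2.outscale.com"),
    )

    # Register an instance: it starts receiving front-end requests.
    lbu.register_instances("my-load-balancer", ["i-12345678"])

    # Deregister it at any time; it can be registered again later.
    lbu.deregister_instances("my-load-balancer", ["i-12345678"])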

In the public Cloud, you can load balance traffic between instances placed in the same AZ or in different AZs of the same Region. In a VPC, you can load balance traffic between instances placed in one or more subnets, in the same AZ or in different AZs of the same Region, that you specify when creating the load balancer. For more information, see About VPCs and Regions, Endpoints and Availability Zones Reference.

Depending on how the security groups of back-end instances are configured, a back-end instance can receive either only the traffic coming from the load balancer, or traffic coming both from the load balancer and from elsewhere (another load balancer, directly from the Internet, and so on).

Load balancers work in round-robin mode: front-end requests are evenly distributed to back-end instances by turns. On average, the number of front-end requests received by one back-end instance corresponds to the total number of front-end requests divided by the total number of back-end instances.
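
The following minimal Python sketch illustrates this round-robin behavior; the backend names are arbitrary:

    from itertools import cycle

    # Back-end instances take turns receiving requests; over N requests,
    # each of the k instances handles roughly N / k of them.
    backends = ["backend-a", "backend-b", "backend-c"]
    next_backend = cycle(backends)

    for i in range(6):
        print(f"request-{i} -> {next(next_backend)}")
    # request-0 -> backend-a
    # request-1 -> backend-b
    # request-2 -> backend-c
    # request-3 -> backend-a, and so on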

The load balancer checks the health of back-end instances and then distributes the traffic load between all the healthy ones. For more information about health checks, see Configuring Health Checks.

As you can configure only one type of health check per load balancer, specifying the port and protocol to check on the back-end instances, we strongly recommend creating one load balancer per service to avoid any undetected failure.
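
The sketch below illustrates the principle of a TCP health check, assuming hypothetical backend addresses; the load balancer's actual implementation is internal to the service:

    import socket

    # Hypothetical back-end addresses; values are illustrative.
    BACKENDS = [("10.0.0.10", 8080), ("10.0.0.11", 8080)]

    def tcp_health_check(host: str, port: int, timeout: float = 2.0) -> bool:
        """Return True if a TCP connection succeeds (a simplified equivalent
        of a load balancer's TCP health check on a given port)."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    # Only healthy instances receive traffic.
    healthy = [b for b in BACKENDS if tcp_health_check(*b)]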


Load Balancer Types

A load balancer can be either Internet-facing or internal:

  • An Internet-facing load balancer can be created either in the public Cloud or in a VPC. This type of load balancer distributes inbound flows coming from the Internet between its back-end instances, which can be placed in different AZs of the public Cloud or in different subnets of the same VPC. This load balancer has a public DNS name and you can access it through the Internet.

  • An internal load balancer can only be created in a VPC. This type of load balancer distributes the traffic load between its back-end instances within one or more subnets of the VPC. This load balancer has a private DNS name and you can access it only from within the CIDR of the VPC.

DNS Name and Listeners

A DNS name is automatically assigned to each load balancer and is composed of the name of the load balancer and its endpoint (load-balancer-name.endpoint). For more information, see Regions, Endpoints and Availability Zones Reference.

This DNS name enables you to reach your load balancer and send requests to it. Internet-facing load balancers receive a public DNS name, while internal load balancers receive a private one. 

Each load balancer must be created with a listener to be able to receive requests. A listener is the process that handles the requests coming to the load balancer from the Internet or from a corporate network. It is configured with a protocol and a port, which must be between 1 and 65535 (both included).

You also configure the protocol and port used to route traffic to the back-end instances.

3DS OUTSCALE load balancers support the HTTP/HTTPS and TCP/SSL protocols. Secure protocols are only supported between the client and the load balancer. The front-end and back-end protocols must be at the same level, therefore:

  • If the front-end protocol is HTTP or HTTPS, the back-end protocol must be HTTP. 
  • If the front-end protocol is TCP or SSL, the back-end protocol must be TCP.

The port number on which back-end instances are listening must also be between 1 and 65535 (both included).
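
The following sketch summarizes these pairing and port rules in Python; the Listener type and protocol names are illustrative, not an official API:

    from dataclasses import dataclass

    # Front-end protocols mapped to the back-end protocol they require.
    VALID_PAIRS = {"HTTP": "HTTP", "HTTPS": "HTTP", "TCP": "TCP", "SSL": "TCP"}

    @dataclass
    class Listener:
        front_protocol: str
        front_port: int
        back_protocol: str
        back_port: int

    def validate(listener: Listener) -> None:
        for port in (listener.front_port, listener.back_port):
            if not 1 <= port <= 65535:
                raise ValueError(f"port {port} out of range 1-65535")
        required = VALID_PAIRS[listener.front_protocol]
        if listener.back_protocol != required:
            raise ValueError(
                f"{listener.front_protocol} front end requires a {required} back end"
            )

    validate(Listener("HTTPS", 443, "HTTP", 8080))    # valid
    # validate(Listener("HTTPS", 443, "TCP", 8080))   # raises ValueError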

You cannot modify the configuration of a listener once it is created. However, you can add as many listeners as you need to a load balancer, and remove them if needed. For more information, see Adding or Deleting Listeners.

You can also manage the behavior of a load balancer using listener rules. These rules enable you to redirect traffic to a specific back-end instance based on a path in the URI of the request. For more information, see Creating a Listener Rule.
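
As an illustration, the following sketch shows the principle of path-based dispatching; the patterns and backend names are hypothetical:

    from fnmatch import fnmatch

    # Path patterns mapped to back-end targets; all names are hypothetical.
    LISTENER_RULES = [
        ("/api/*", "backend-api"),
        ("/static/*", "backend-static"),
    ]
    DEFAULT_BACKEND = "backend-default"

    def route(path: str) -> str:
        """Return the backend whose pattern matches the request path,
        or the default backend if no rule matches."""
        for pattern, backend in LISTENER_RULES:
            if fnmatch(path, pattern):
                return backend
        return DEFAULT_BACKEND

    print(route("/api/v1/users"))  # backend-api
    print(route("/index.html"))    # backend-default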


Sticky Sessions

By default, a load balancer distributes each network request independently, which means that two successive requests of the same user may be routed to two different back-end instances. However, you can use a sticky session policy to bind the user to the back-end instance that handled the first request.

Sticky sessions work by creating a stickiness cookie with a specific duration. They are available for load balancers with HTTP and HTTPS listeners only. When the stickiness cookie expires, the sticky session is reset and the next request creates a new sticky session.

There are two types of sticky session policies:

  • Duration-based: The stickiness cookie has a specific time duration.
  • Application-controlled: The stickiness cookie has the same duration as a cookie issued by the application on the back-end instance.

If the back-end instance that the user is bound to becomes unhealthy, the sticky session is reset among the remaining healthy instances. The session then sticks to another instance until the cookie expires, even if the original instance becomes healthy again.
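
The sketch below illustrates the principle of a duration-based sticky session; the cookie handling is simplified and all names are illustrative:

    import time
    import uuid
    from typing import List, Optional, Tuple

    # The load balancer issues its own stickiness cookie with a fixed
    # lifetime and keeps routing requests carrying a valid cookie to the
    # same back-end instance.
    COOKIE_DURATION = 300  # seconds; defined by the stickiness policy
    sessions = {}          # cookie value -> (backend, expiry timestamp)

    def pick_backend(cookie: Optional[str], healthy_backends: List[str]) -> Tuple[str, str]:
        """Return (backend, cookie), reusing the bound backend while the
        cookie is valid and the backend is still healthy."""
        now = time.time()
        if cookie in sessions:
            backend, expiry = sessions[cookie]
            if now < expiry and backend in healthy_backends:
                return backend, cookie
        # Cookie missing, expired, or instance unhealthy: bind a new session.
        backend = healthy_backends[0]  # in practice chosen by round robin
        new_cookie = str(uuid.uuid4())
        sessions[new_cookie] = (backend, now + COOKIE_DURATION)
        return backend, new_cookie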

For more information, see Configuring Sticky Sessions for Your Load Balancers.



SSL Termination and SSL Passthrough for Load Balancers

You can create a listener with an x509 SSL server certificate for a load balancer, to enable encrypted traffic in the SSL or HTTPS protocol between the load balancer and the clients initiating SSL or HTTPS connections. x509 server certificates are delivered by a certification authority and contain authentication information, such as a public key and the signature of the certification authority. The load balancer uses this certificate to decrypt connection requests from clients, which are then forwarded to its registered back-end instances in the protocol of your choice (TCP or HTTP).

The certificate used by the load balancer for SSL termination can be replaced at any time. For more information, see Replacing the SSL Certificate Used by an HTTPS or SSL Load Balancer.

SSL termination ensures the confidentiality of the communication between the client and the load balancer by checking the authentication information. The communication between your load balancer and its registered back-end instances is unencrypted, and its security is ensured by the rules you add to the security groups of your back-end instances. You may need to use SSL termination on load balancers in cases where you have to ensure confidentiality, for example for a website that requires login and password authentication.

You can also forward HTTPS flows to back-end instances that have an SSL certificate, using the TCP protocol. With this method, called SSL passthrough, the server certificate is uploaded on the back-end instances instead of the load balancer. The load balancer does not decrypt the data flowing between the client and the back-end instances through the TCP protocol.
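
The following simplified sketch contrasts the two modes; certificate paths and addresses are hypothetical, and a real load balancer streams data in both directions rather than forwarding a single buffer:

    import socket
    import ssl

    # SSL termination: the certificate lives on the load balancer, which
    # decrypts the client's traffic and forwards it unencrypted.
    def ssl_termination(client_sock: socket.socket, backend_addr) -> None:
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
        ctx.load_cert_chain("server.crt", "server.key")  # certificate on the LB
        tls_sock = ctx.wrap_socket(client_sock, server_side=True)
        data = tls_sock.recv(4096)                       # decrypted here
        with socket.create_connection(backend_addr) as backend:
            backend.sendall(data)                        # forwarded in clear text

    # SSL passthrough: the load balancer relays encrypted bytes over TCP;
    # the certificate lives on the back-end instance, which decrypts.
    def ssl_passthrough(client_sock: socket.socket, backend_addr) -> None:
        data = client_sock.recv(4096)                    # still encrypted
        with socket.create_connection(backend_addr) as backend:
            backend.sendall(data)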

For more information about how to configure load balancer listeners for load balancers with SSL termination or SSL passthrough, see Configuring a Load Balancer for SSL Termination or SSL Passthrough.

You can create the SSL or HTTPS listener when creating the load balancer, or add it to the load balancer at any time. You first need to upload your server certificate to Elastic Identity Management (EIM), and then specify it when creating the listener using its Outscale Resource Name (ORN). For more information, see Working with Server Certificates.
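
As an illustration, here is a minimal sketch of this workflow with the legacy boto 2 SDK, assuming EIM and the load balancing service expose IAM- and ELB-compatible Query APIs; the endpoints, credentials, file names, and ORN value are hypothetical placeholders:

    from boto.iam.connection import IAMConnection
    from boto.ec2.elb import ELBConnection
    from boto.regioninfo import RegionInfo

    # Upload the server certificate to EIM (hypothetical endpoint).
    eim = IAMConnection(
        aws_access_key_id="MY_ACCESS_KEY",
        aws_secret_access_key="MY_SECRET_KEY",
        host="eim.eu-west-2.outscale.com",
    )
    with open("server.crt") as crt, open("server.key") as key:
        eim.upload_server_cert("my-certificate", crt.read(), key.read())

    # The upload response carries the certificate's identifier (its ORN);
    # the value below is a made-up placeholder for illustration.
    cert_orn = "orn:ows:idauth::123456789012:server-certificates/my-certificate"

    lbu = ELBConnection(
        aws_access_key_id="MY_ACCESS_KEY",
        aws_secret_access_key="MY_SECRET_KEY",
        region=RegionInfo(name="eu-west-2", endpoint="lbu.eu-west-2.outscale.com"),
    )
    # HTTPS listener on port 443, forwarding to HTTP port 8080 on the backends.
    lbu.create_load_balancer_listeners(
        "my-load-balancer", listeners=[(443, 8080, "HTTPS", cert_orn)]
    )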

It is recommended to use one certificate per load balancer.
