Performance Evaluation and Optimization Guide for 10Mbps Bandwidth Servers
When your server has only a meager 10Mbps bandwidth, how do you ensure the smooth operation of critical services? This article will guide you through a comprehensive evaluation of your server’s network performance, identify performance bottlenecks, and provide a step-by-step optimization guide to help your low-bandwidth server deliver exceptional performance.
Evaluating Existing Bandwidth
Before formulating a network optimization strategy, we need to clarify the server’s actual bandwidth situation. There are many network speed testing tools available, but most of them only focus on download speed, while upload speed and latency are more reflective of the server’s network quality. Here, we recommend using iperf for comprehensive performance testing.
First, install iperf on the server (the command below uses yum on CentOS/RHEL; on Debian/Ubuntu the equivalent is apt install iperf):
yum install iperf
Then install iperf on another host (ideally one on the same network as your target users) to act as the client. Start iperf in server mode on the server:
iperf -s
On the client, start the iperf client to test the network quality to the server:
iperf -c <server_ip> -d -t 60 -P 10
This command opens 10 parallel connections (-P 10), tests upload and download simultaneously (-d), and runs for 60 seconds (-t 60). When it finishes, iperf prints per-stream and aggregate bandwidth figures; note that the TCP test measures throughput rather than latency. Run it several times and average the results to get reasonably reliable data.
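Latency is easy to check separately from the same client with a plain ping; a minimal sketch (replace <server_ip> with the server's address):
# 60 ICMP probes at the default one-second interval
ping -c 60 <server_ip>
The summary line at the end reports the minimum, average, and maximum round-trip times.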
Estimating Business Traffic
After understanding the server’s actual network performance, the next step is to assess the traffic requirements of the business. Different applications have vastly different degrees of dependence on the network. For example, video streaming and online conferencing are bandwidth-intensive, while API services and static websites have lower bandwidth requirements.
We can analyze the application’s access logs to derive the network traffic distribution. For example, Nginx’s access log:
tail -n 10000 /var/log/nginx/access.log | awk '{sum+=$10} END {print sum/10000/1024}'
This command analyzes the last 10,000 requests, summing the response-size field ($body_bytes_sent, the 10th whitespace-separated field in the default combined log format) to get the average traffic per request in KB. Assuming the website averages 500,000 page views per day, the average bandwidth demand is roughly:
500000 * average traffic per request (KB) * 8 / 1024 / 86400 (in Mbps)
Here the *8 converts kilobytes to kilobits, the /1024 converts kilobits to megabits, and the /86400 spreads the day's traffic over its seconds; keep in mind that peak-hour traffic is usually several times this average.
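As a quick back-of-the-envelope check, assuming a hypothetical average of 50 KB per request:
# 500,000 requests per day at an assumed 50 KB each
echo "500000 * 50 * 8 / 1024 / 86400" | bc -l
# prints roughly 2.26, i.e. about 2.3 Mbps on average; peaks several times that already press a 10Mbps link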
Similarly, we can collect the outbound traffic of application servers, database query traffic, etc., to comprehensively assess the overall traffic scale of the business and see if there are any bandwidth bottleneck risks.
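As a cross-check on the log-based estimate, you can also sample the interface counters directly; a minimal sketch using sar from the sysstat package (assuming the public interface is eth0):
# report per-interface traffic once a second for 60 seconds
sar -n DEV 1 60 | grep eth0
# the txkB/s column times 8, divided by 1024, is roughly the outbound rate in Mbps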
Implementing Bandwidth Optimization
When the estimated business traffic approaches or even exceeds the server’s bandwidth limit, optimization measures need to be taken.
Content Caching
Caching hot content to reduce the server’s repeated transmission is the most effective way to save bandwidth. Caching can be added at different levels:
- Browser caching: Reasonably set the Cache-Control/Expires headers of resources to allow static content to be cached longer on the user side
- CDN caching: Utilize the caching capabilities of CDN edge nodes to allow users to obtain resources nearby
- Application caching: Cache commonly used data and computation results locally on the application server to reduce access to downstream services such as databases
For example, in Nginx, you can easily enable static resource caching:
location ~* \.(jpg|png|gif|js|css)$ {
    expires 7d;   # sets Expires and Cache-Control: max-age so browsers cache static assets for a week
}
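If Nginx already fronts the application, a small reverse-proxy cache can also absorb repeated dynamic requests; a minimal sketch, where the cache path, the app_cache zone name, the /api/ location, and the upstream address are all assumptions to adapt:
# in the http block: define a cache zone on disk
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=app_cache:10m max_size=1g inactive=60m;

server {
    listen 80;
    location /api/ {
        proxy_cache app_cache;            # use the zone defined above
        proxy_cache_valid 200 5m;         # keep successful responses for five minutes
        proxy_pass http://127.0.0.1:8080; # assumed upstream application server
    }
}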
Traffic Prioritization
If bandwidth requirements still cannot be met after caching, you can consider sacrificing some non-core traffic to protect the experience of critical services. Linux ships with a powerful traffic control tool, tc, which can implement advanced features such as QoS and traffic shaping.
For example, set the priority of management traffic such as sshd and rsync to the lowest:
# create a prio qdisc with its default three bands; band 1:1 is dequeued first, band 1:3 last
tc qdisc add dev eth0 root handle 1: prio
# push SSH (port 22, both directions) and rsync (port 873) into the lowest-priority band 1:3
tc filter add dev eth0 parent 1: protocol ip prio 4 u32 match ip dport 22 0xffff flowid 1:3
tc filter add dev eth0 parent 1: protocol ip prio 4 u32 match ip sport 22 0xffff flowid 1:3
tc filter add dev eth0 parent 1: protocol ip prio 4 u32 match ip dport 873 0xffff flowid 1:3
Set the priority of critical business traffic, such as databases, to the highest:
# send MySQL traffic (port 3306) to the highest-priority band 1:1
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 match ip dport 3306 0xffff flowid 1:1
This way, the smoothness of core services can be prioritized when bandwidth is tight.
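You can confirm the bands and filters are in place, and watch which band traffic actually lands in, with:
# per-band statistics and the installed filters
tc -s class show dev eth0
tc filter show dev eth0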
Continuous Monitoring and Optimization
Network optimization is not a one-time effort, but a continuous process of iteration. After implementing bandwidth optimization, it is necessary to continuously monitor the server’s network performance and service quality to promptly identify and resolve new issues.
There are many monitoring tools available, such as Zabbix and Prometheus. They can collect real-time traffic data from servers and configure alert thresholds. When a metric exceeds the preset threshold, the system will notify the administrator to intervene.
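As an illustration, a hedged sketch of a Prometheus alerting rule, assuming node_exporter is running and the public interface is eth0 (the group name, alert name, threshold, and durations are arbitrary choices to adapt):
groups:
  - name: bandwidth                # hypothetical rule group
    rules:
      - alert: OutboundNearLimit   # hypothetical alert name
        # average outbound bits per second over 5 minutes, compared with an 8 Mbps threshold
        expr: rate(node_network_transmit_bytes_total{device="eth0"}[5m]) * 8 > 8000000
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "Outbound traffic on eth0 is approaching the 10Mbps cap"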
In addition, user feedback is also an important monitoring method. When users report that access to a service is slow, it should be taken seriously, and network bottlenecks should be investigated immediately.
It should be emphasized that the optimization space for low-bandwidth servers is always limited. If the business grows too fast, 10Mbps bandwidth will eventually become insufficient. At this point, it is necessary to consider upgrading the server configuration or distributing traffic across multiple servers. Network planning should be done in advance and within one’s means.