
[PSA: Server Admins] How to Optimize your DayZ Server for MAXIMUM Performance.

Recommended Posts

Guest

Gotcha :]

^.^

Guest

-bump- Back to the top. Could I get a sticky? :)

Guest

I'd sticky this if I could!

Bump!

Guest

Back to the top.

Guest

Last bump for the night. Show some support guys :).

Guest

This guide is wonderful. I'll be testing it under a full server later tonight. I can say that right now (with about ten people) the server loads about 10 times faster. Thanks!

Thanks for the support!


I tweaked the settings and now have an issue where some people are getting signature check timeouts who weren't before I made all of these changes. Any idea which settings would be causing the signature check packets to get dropped or time out?

It's weird. It doesn't happen to everyone, or even to many people, but at least 5 people in my clan can no longer play on the server. I'm not sure which options would be making it time out, though. Any ideas? Thanks in advance.


Anyone with the slightest bit of networking knowledge would know that increasing the packet size won't help. Packets larger than the usual 1500-byte MTU will most certainly get dropped by the next hop towards the client.



Or worse, it'll cause a lot of IP fragmentation, so you have to accept two packets per request and process the extra overhead of that.
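A quick way to check the usable path MTU from a Windows machine is ping with the don't-fragment flag; 1472 bytes of ICMP payload plus 28 bytes of headers equals 1500, and the host name below is a placeholder:

ping -f -l 1472 your.server.example.com

If that reports the packet needs fragmenting while smaller sizes go through, something on the path has an MTU below 1500.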

Some obvious things missing here:

- Make sure you are running the latest firmware/driver version on your NICs.

- Enable TCP/UDP checksum offloading (see the netsh sketch after this list).

- Disable interrupt moderation (don't wait, send that packet now).

- Disable hyperthreading; it improves overall throughput but typically adds latency.
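A minimal sketch of the offloading toggles on a Windows Server 2008 R2 era box, using netsh (interrupt moderation has no universal command; it is usually a per-NIC driver property under Device Manager, adapter Advanced tab):

netsh int tcp set global chimney=enabled

netsh int ip set global taskoffload=enabled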

I'm not a big fan of the TCPOptimizer for one specific reason: it requires you to decide what latency you want to allow for on your servers. Proper calculation of the RWIN size requires the correct BDP (bandwidth-delay product), which means you have to define the maximum latency. Setting a really large RWIN value could also create issues if the TCP window scaling option gets thrown on the floor by a router, switch, or server on the path between you and the server.
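To put numbers on that (link speed and latency assumed purely for illustration): BDP = bandwidth × RTT, so a 100 Mbit/s link with a 50 ms round trip gives 100,000,000 / 8 × 0.05 = 625,000 bytes, roughly a 625 KB receive window needed to keep that pipe full.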

In short, I'd prefer to leave TCP auto-tuning turned on.

TCP Auto-Tuning

To turn off the default RWIN auto-tuning behavior, type (in an elevated command prompt):

netsh int tcp set global autotuninglevel=disabled

The default auto-tuning level is "normal", and the possible settings for the above command are:

disabled: uses a fixed value for the TCP receive window, limiting it to 64 KB (capped at 65535).

highlyrestricted: allows the receive window to grow beyond its default value, but very conservatively.

restricted: somewhat restricted growth of the TCP receive window beyond its default value.

normal: the default; allows the receive window to grow to accommodate most conditions.

experimental: allows the receive window to grow to accommodate extreme scenarios (not recommended; it can degrade performance in common scenarios and is only intended for research purposes. It enables values of over 16 MB).
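To check the current level, and to restore the default if a change makes things worse (both standard netsh commands, run from an elevated prompt):

netsh int tcp show global

netsh int tcp set global autotuninglevel=normal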


A lot of people focus on the network, network, network.

I work with complex protocols (mostly BGP and MPLS) every day on the ISP side. Yes, this game uses UDP, that's a given. TCP is there for browser stuff, and (I'm 3 days in) after running Wireshark on some tests, it's using both. Whatever, I'm not concerned with that.

Perhaps the biggest jump I notice on any server I build in our datacenter comes from the hard drives. If you are still running 15k SCSI/SAS drives or whatever, that's still OK. But if you REALLY want to improve your load times, invest in a RAID SSD setup. I just set up a 3x RAID 0+1 (3 SSD drives, datacenter grade, writing at over 640 MB/s and reading at over 700 MB/s) and it blazed through anything I could throw at it: 3 drives all sharing the load, with simple mirroring over to an exact duplicate set. All of that goes out to a SAN (storage area network) via fiber cards.

If you supposedly have a 100 Mbit connection (most people don't, lol; they assume that because the NIC says 100 Mb, that's what they have), is it REALLY a 100 Mbit connection? The upload is the most important part; I've seen 100 Mbit download and then... 20 Mbit upload. Most of the time, if you are running a DS3 or higher (roughly a 45 Mbit connection), you will get full duplex.

If ALL that bandwidth is yours, then that's awesome; if you are in a datacenter, there's no way you are getting all of that, hah.

The foundation of the server is going to be CPU, RAM, and storage. It's silly when I see dual quad cores and 32 GB of RAM paired with... a 15k RAID 5 drive array. Just replace those drives with SSDs and it's an insane boost in performance; we are talking a 50 to 200% difference in access speeds.

Of course, don't take my word for it, I just started playing yesterday, but I wanted to add this, relying on 15 years of datacenter / network engineering / server experience. I didn't see ANY mention of SSDs. (And yes, if you do go SSD, you will wear them out in about 1 to 2 years; it's an ongoing investment, if you will, IF you are a HEAVY user. Our record so far is 2 years 3 months, hah.) :)

-Nick



Of course SSDs will boost performance; they are about 50 times faster per drive.

I work with datacenters, disaster recovery, etc., and I thought I should mention that running SSDs mirrored, or in any redundant RAID, is pointless if you use the same type of drive throughout. Since SSDs have a short lifespan, bounded by their maximum read/write volume, all the drives will fail shortly after each other when you hit that limit. So mirroring is a waste when it comes to SSDs.


"You will also need to add this value in your dedicated server's ArmA 2 CLIENT configuration (Documents/ArmA 2/ArmA2OA.cfg)."

Would everyone need to do this on their client, or just the server host?



To Nick's point about drives: I hardly see any disk I/O while the server is running, and a 100/100 connection is not that uncommon here in Sweden.

(Yes, it's a home connection, not a shared one.)

[attached screenshot of a connection speed test]

An SSD would certainly help at server startup, but while the server is up and running there is hardly any disk I/O at all.


ArmA 2 servers generally do all of the AI processing for the players, so a well-optimized mission, good AI routines, and a great CPU will help much, much more than an SSD.

Also, this guide caused awful desync in vehicles on a fresh startup with fewer than 10 players. I did some tweaking and it's much better now, although the config is pretty much back to the way it was before.



Would you care to share your tweaks with the community?


This didn't work at all; our servers got 1-3 minute loading times when we had 2-5 seconds from the start.

I had the exact same problem. I went back into TCPOptimizer, clicked "Optimal", and restarted. Now it works fine.

It seems like the section of the guide that says "There are a few other settings that are not enabled by default with the 'optimal'" should be ignored, since it breaks things.


How big is the local database for one server? Because forget SSDs: a RAM drive is the way to go if you want your storage to keep up with the multithreading challenge that ArmA 2 poses to the system.

Faster access means the query is ready sooner and the rest of the process can continue.
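For anyone who wants to try this, a minimal sketch using the free ImDisk driver; the size, drive letter, and paths are placeholders, and you would need to copy the database back to persistent storage on a schedule, since a RAM drive loses everything on reboot:

imdisk -a -s 2048M -m R: -p "/fs:ntfs /q /y"

xcopy C:\DayZServer\database R:\database /e /i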


Hey guys,

Does anyone know what might be causing constant, very frequent yellow and red desync chains? This happens to all players on my server, 50+ players...

The server is running at 20% CPU load, 10% memory load, and 15% bandwidth usage (8.0 Mbps down, 15.0 Mbps up), so I'm confident it is not resource related.

I have restarted the box, but people are still getting very bad desync. Today is the first day of moving my old server onto this new server; we tested the server previously and did not have any issues, however now with 50-60 players we have very, very bad desync.

Map = Chernarus.

DayZ = 1.7.5.1.

DayZCC = 5.9.1.0

Arma Beta = 101480

Internet connection: 100 Mbit download + 100 Mbit uplink. (tested on speedtest.net and got 95.0 down, 85.0 up)

Specs of the server:

CPU (Processor): Intel Xeon E3 1245v2 (3.4+ GHz, 3.8 GHz boost)

RAM (Memory): 32,768 MB (32 GB) ECC DDR3

HDD (Hard Disk): 2 x 120 GB Intel SSDs

RAID Configuration: RAID 0

Operating System: Windows Server 2008 R2

Included Bandwidth: Unmetered

Location: Roubaix, France, EU

I have followed each part of the first post on page 1 precisely, including the TCPOptimizer, and it has not made any difference. I really don't know what to do...

Does anyone have any advice please?

Thank you! :D

Edited by -Panda

Share this post


Link to post
Share on other sites

There are a lot of good suggestions here, but I wouldn't recommend using /REALTIME, since it causes slowdowns for other processes and might even lead to lower server FPS over time. Fiddling too much with the packet settings might not be too wise either; there have been a lot of tests with a ton of different settings, and sticking to the defaults is usually best in most cases.

Either way, good tut :)
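If you want elevated priority without /REALTIME's downsides, a sketch of a launch line using HIGH priority instead (the config and profile paths are placeholders; arma2oaserver.exe and its parameters are the usual ArmA 2 OA dedicated server ones):

start "" /HIGH arma2oaserver.exe -config=server.cfg -cfg=basic.cfg -port=2302 -profiles=profiles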


I did a full reinstall on my server, including Windows Server 2008, and now everything is working well: no lag, no desync issues. Very strange :o I used the settings from TCPOptimizer too :)

Fingers crossed, but so far so good :D

Guest Dwarden

The easter egg entry is nonsense ... a packet size higher than the MTU would result in fragmented packets.

The reason the setting was introduced was to allow smaller packet sizes in case of routing troubles.

Also, it needs to be set on both client and server ... (if the client has 1400, it will send and accept only 1400 or less).

I definitely do wonder how the person who wrote it came to such a conclusion.

Next time, read the official BIKI documentation: https://community.bistudio.com/wiki/basic.cfg#Networking_Tuning_Options
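For reference, those tuning options live in the server's basic.cfg; a minimal sketch using values close to the documented defaults (illustrative, not recommendations; tune MinBandwidth and MaxBandwidth to your actual uplink):

MinBandwidth = 131072;
MaxBandwidth = 10000000;
MaxMsgSend = 128;
MaxSizeGuaranteed = 512;
MaxSizeNonguaranteed = 256;
MinErrorToSend = 0.001;

class sockets
{
maxPacketSize = 1400;
};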

Edited by Dwarden


Hello mates, first of all thank you a lot for this post, and sorry for my bad English!

This is the hardware of our dedicated server:

CPU (Processor): Intel Xeon E3 1245v2 (3.4+ GHz, 3.8 GHz boost)

RAM (Memory): 32,768 MB (32 GB) ECC DDR3

HDD (Hard Disk): 2 x 2000 GB

RAID Configuration: RAID 0

Operating System: Windows Server 2008 R2

Included Bandwidth: Unmetered

Location: Roubaix, France, EU

OS: Linux host with a Windows Server 2008 R2 guest in VirtualBox

Well, we're facing a lot of instability problems. We have 2 servers, one with 40+ players and the other with 20 players. The problem is that when the first server reaches 40+ players it has a continuous yellow or red chain, lots of lag, and the ping of the players increases significantly; I usually have 30 ping, and when the server is full my ping is 120-300... We are going to apply the changes suggested here to our configuration, but is there any other help or suggestion to fix these problems? Any help will be appreciated... Thank you very much!!

Our server runs on Linux with a VirtualBox VM running Windows 2008 R2. The 2 servers are on the same VM.

Edited by Jimmorz

