update README and backup server utils

This commit is contained in:
HF 2021-06-04 16:41:47 +02:00
parent 706fb2729e
commit 399d548794
10 changed files with 41 additions and 156 deletions

View File

@ -188,10 +188,8 @@ pm2 stop web
```
### If using Cloudflare / Reverse Proxy
In order to get the real IP and not use the cloudflare proxy IP for placing pixels, we filter those out. The cloudflare IPs are in src/utils/cloudflareip.js and are used in src/utils/ip.js. If cloudflare adds more IPs at some point, you can see them at https://www.cloudflare.com/ips/ and add them.
If you use any other reverse proxy, you can define its IPs there too.
If USE\_XREALIP is set, we take the IP from the X-Real-Ip header without checking for cloudflare IPs. Use this if you have pixelplanet running behind nginx and use the nginx set\_realip module to give us the client IP in the X-Real-Ip header. Be sure to also forward X-Forwarded-Port and set X-Forwarded-Proto.
If USE\_XREALIP is set, we take the IP from the X-Real-Ip header. Use this if you have pixelplanet running behind nginx and cloudflare. Use the nginx set\_realip module to give us the client IP in the X-Real-Ip header (and set it up so that only cloudflare IPs are trusted proxies, or else players could fake their IP). Be sure to also forward X-Forwarded-Port and set X-Forwarded-Proto.
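A minimal sketch of what that nginx setup could look like (the listen port, upstream address and the single Cloudflare range are placeholder assumptions; a real config needs every range from https://www.cloudflare.com/ips/):
```
server {
    listen 443 ssl;
    # restore the real client IP; trust only cloudflare as proxy
    set_real_ip_from 173.245.48.0/20;
    real_ip_header CF-Connecting-IP;
    location / {
        proxy_pass http://127.0.0.1:8080;
        # hand the restored client IP to pixelplanet
        proxy_set_header X-Real-Ip $remote_addr;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```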
### Auto-Start
To have the canvas with all its components autostart at system start,
@ -220,11 +218,11 @@ The backup script gets built when building pixelplanet and also gets copied to b
node backup.js REDIS_URL_CANVAS REDIS_URL_BACKUP BACKUP_DIRECTORY [INTERVAL] [COMMAND]
```
Make sure to get the order right, because the backup redis instance will be overwritten every hour.
Make sure to get the order right, because the backup redis instance will be overwritten every day.
Interval is the time in minutes between incremental backups. If interval is undefined, it will just make one backup and then exit.
If command is defined, it will be executed after every backup (just one command, with no arguments, like "dosomething.sh"); this is useful e.g. for synchronisation with a storage server.
If command is defined, it will be executed after every backup (just one command, with no arguments, like "dosomething.sh"); this is useful e.g. for synchronisation with a storage server. Look into utils/backupServer for some scripts and info on how to run it.
Alternatively you can run it with pm2, just like pixelplanet. An example ecosystem-backup.example.yml file will be located in the build directory.
You can run it with pm2, just like pixelplanet. An example ecosystem-backup.example.yml file will be located in the build directory.
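For example, a hypothetical invocation that backs up the canvas redis on port 6379 to a backup redis on port 6380, makes incremental backups every 15 minutes and runs a sync script after each one (URLs and paths are placeholders):
```
node backup.js redis://localhost:6379 redis://localhost:6380 /home/pixelplanet/backup 15 backupSync.sh
```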
Note:
- You do not have to run backups or historical view; they are optional.

View File

@ -45,14 +45,6 @@ downloads the history from a canvas area between two dates.
Usage: `historyDownload.py canvasId startX_startY endX_endY start_date end_date`
This is used for creating timelapses; see the script's help output to learn how.
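A hypothetical example that grabs one month of history of a 2000x2000 area around the center of canvas 0 (the concrete values and date format are assumptions; check the script's help output):
```
python3 historyDownload.py 0 -1000_-1000 1000_1000 2021-05-01 2021-05-31
```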
## historyCopy.py
same as historyDownload, except it's designed to run on the storage server, copying the chunks directly. Also, instead of a time range, you define the number of days you want to make a timelapse of.
## backupSync.sh
shell script that can be launched with backup.js to sync to a storage server after every backup. It uses rsync, which is much faster than ftp, sftp or any other method.
### Note:
Backups, aka historical view, use lots of files and might eventually hit the inode limit of your file system; consider using mksquashfs to compress past backups into read-only images and mounting them.
## liveLog.sh
shell script that watches the pixel.log file and outputs stats of the IPs currently placing pixels in the given area
Usage: `./liveLog.sh LOGFILE CANVASID STARTX_STARTY ENDX_ENDY`
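For example, to watch placements in a 1000x1000 area around the center of canvas 0 (the log path is a placeholder):
```
./liveLog.sh /home/pixelplanet/log/pixel.log 0 -500_-500 500_500
```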

View File

@ -0,0 +1,20 @@
# Scripts and notes for backup (historical view) server
## backupSync.sh
shell script that can be launched with backup.js to sync to a storage server after every backup. It uses rsync, which is much faster than ftp, sftp or any other method.
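A minimal sketch of what such a sync script could look like (host and paths are placeholder assumptions, not the shipped script):
```
#!/bin/bash
# push the local backup directory to the storage server after each backup run;
# -a keeps permissions and timestamps, -H preserves hardlinks instead of
# transferring linked tiles twice
rsync -aH /home/pixelplanet/backup/ storage.example.com:/srv/pixelplanet-backups/
```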
## Notes
Historical view and backups are best stored on a separate server.
Backups generate lots of files. Every day one full backup is made, with a png file for every single tile, and incremental backups are written every 15min as png files containing just the pixels that changed since the last full backup.
If the filesystem has a limit on inodes, the sheer amount of small files is very likely to hit it within a year or so.
The following scripts mitigate this and decrease disk usage.
## hardlink.sh <daily-backup-folder-1> <daily-backup-folder-2>
Compares the full-backup tile files from one day to the next and creates hardlinks for equal tiles, which significantly reduces the number of inodes and the disk space used.
This uses the hardlink_0.3 util from https://jak-linux.org/projects/hardlink/, which ships with current Debian and Ubuntu; other distributions may ship a different version.
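For example, to deduplicate one day's full backup against the previous day (the folder names are hypothetical):
```
./hardlink.sh 20210604 20210603
```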
## mksquashfs
Backups from a whole month can be archived into a mountable read-only image with squashfs.
Squashfs compresses the data, including inodes, and resolves all duplicates. However, when compressing it needs to parse all inodes, which takes lots of RAM (at least 256 bytes per inode, and we have millions of files).
We use the arguments `-b 8192 -no-xattrs -no-exports -progress`, where -no-exports is necessary in order to not hit the memory limit when mounting multiple images.
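Archiving a month of daily backup folders and mounting the result could then look like this (folder and image names are placeholders):
```
mksquashfs 202105* month-2021-05.sqsh -b 8192 -no-xattrs -no-exports -progress
mount -o loop,ro month-2021-05.sqsh /mnt/backups/2021-05
```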

View File

@ -0,0 +1,17 @@
#!/bin/bash
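# resolve duplicate tiles between two daily full backups into hardlinks
# Usage: ./hardlink.sh <daily-backup-folder-1> <daily-backup-folder-2>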
DIR=$1
PREV_DIR=$2
echo "---Resolve duplicates to hardlinks---"
for CAN_DIR in `ls ${DIR}`; do
  if [ -d "${DIR}/${CAN_DIR}/tiles" ] && [ -d "${PREV_DIR}/${CAN_DIR}/tiles" ]; then
    for COL in `ls ${DIR}/${CAN_DIR}/tiles`; do
      WDIR="${CAN_DIR}/tiles/${COL}"
      echo "----${CAN_DIR} / ${COL}----"
      if [ -d "${DIR}/${WDIR}" ] && [ -d "${PREV_DIR}/${WDIR}" ]; then
        echo /usr/bin/hardlink --respect-name --ignore-time --ignore-owner --maximize "${DIR}/${WDIR}" "${PREV_DIR}/${WDIR}"
        /usr/bin/hardlink --respect-name --ignore-time --ignore-owner --maximize "${DIR}/${WDIR}" "${PREV_DIR}/${WDIR}"
      fi
    done
  fi
done

View File

@ -1,2 +0,0 @@
This script was used while increasing the size of the moon canvas.
It just adds additional empty tiles to the daily backups to pad the size in historical view; no big deal.

Binary file not shown.

Before: 565 B

Binary file not shown.

Before: 567 B

View File

@ -1,102 +0,0 @@
#!/bin/bash
# This script creates tiles in the backup folder for the moon canvas
# so that its size can be increased from 1024 to 4096.
# Without the padding tiles, historical view would show loading tiles
# in the parts that exceed the previous size
# (which wouldn't be too bad tbh, but let's be safe and put them there).
CANVAS=1
# pass 1: for every daily full backup, build the complete 16x16 tile grid,
# moving the original 4x4 area into the middle (offset +6) and padding the
# rest with empty tiles
for DATEFOLDERS in `ls`
do
  TILEFOLDER="${DATEFOLDERS}/${CANVAS}/tiles"
  if [ -d "${TILEFOLDER}" ]
  then
    y=15
    while [ $y -ge 0 ]
    do
      TILEYDIR="${TILEFOLDER}/${y}"
      if [ ! -d "${TILEYDIR}" ]
      then
        mkdir "${TILEYDIR}"
      fi
      if [ $y -lt 4 ]
      then
        # swap row y with row y+6 (row y+6 was already padded earlier,
        # because the loop counts down from 15)
        newy=$(( $y + 6 ))
        NEWTILEYDIR="${TILEFOLDER}/${newy}"
        echo "Move ${TILEYDIR} to ${NEWTILEYDIR}"
        mv "${NEWTILEYDIR}" ./tmptiledir
        mv "${TILEYDIR}" "${NEWTILEYDIR}"
        mv ./tmptiledir "${TILEYDIR}"
        x=15
        while [ $x -ge 0 ]
        do
          TILE="${NEWTILEYDIR}/${x}.png"
          if [ $x -lt 4 ]
          then
            # swap tile x with tile x+6 within the moved row
            newx=$(( $x + 6 ))
            NEWTILE="${NEWTILEYDIR}/${newx}.png"
            echo "Move ${TILE} to ${NEWTILE}"
            mv "${NEWTILE}" ./tmptile.png
            mv "${TILE}" "${NEWTILE}"
            mv ./tmptile.png "${TILE}"
          else
            if [ ! -f "${TILE}" ]
            then
              cp ./empty.png "${TILE}"
              echo "Create ${TILE}"
            fi
          fi
          x=$(( $x - 1 ))
        done
      else
        # rows outside the original area just get padded with empty tiles
        x=0
        while [ $x -lt 16 ]
        do
          TILE="${TILEYDIR}/${x}.png"
          if [ ! -f "${TILE}" ]
          then
            cp ./empty.png "${TILE}"
            echo "Create ${TILE}"
          fi
          x=$(( $x + 1 ))
        done
      fi
      y=$(( $y - 1 ))
    done
  fi
done
# pass 2: shift the tiles of the incremental (time-stamped) backups into the
# middle (offset +6) as well; those are sparse, so no padding is needed
for DATEFOLDERS in `ls -d */`
do
  CANVASFOLDER="${DATEFOLDERS}${CANVAS}"
  if [ -d "${CANVASFOLDER}" ]
  then
    for TIMES in `ls ${CANVASFOLDER}`
    do
      if [ "${TIMES}" != "tiles" ]
      then
        TIMEFOLDER="${CANVASFOLDER}/${TIMES}"
        for y in `ls -r "${TIMEFOLDER}"`
        do
          newy=$(( $y + 6 ))
          TILEYDIR="${TIMEFOLDER}/${y}"
          NEWTILEYDIR="${TIMEFOLDER}/${newy}"
          echo "Move ${TILEYDIR} to ${NEWTILEYDIR}"
          mv "${TILEYDIR}" "${NEWTILEYDIR}"
          for XNAME in `ls -r ${NEWTILEYDIR}`
          do
            x=`echo ${XNAME} | sed 's/.png//'`
            newx=$(( $x + 6 ))
            TILE="${NEWTILEYDIR}/${x}.png"
            NEWTILE="${NEWTILEYDIR}/${newx}.png"
            echo "Move ${TILE} to ${NEWTILE}"
            mv "${TILE}" "${NEWTILE}"
          done
        done
      fi
    done
  fi
done

View File

@ -1,38 +0,0 @@
/* @flow */
// move chunks to the middle when changing the size of the moon canvas from 1024 to 4096
import redis from 'redis';
import bluebird from 'bluebird';

bluebird.promisifyAll(redis.RedisClient.prototype);
bluebird.promisifyAll(redis.Multi.prototype);

// ATTENTION: make sure to set the redis URLs right!
const oldurl = "redis://localhost:6380";
const oldredis = redis.createClient(oldurl, { return_buffers: true });
const newurl = "redis://localhost:6379";
const newredis = redis.createClient(newurl, { return_buffers: true });

async function copyChunks() {
  for (let x = 0; x < 5; x++) {
    for (let y = 0; y < 5; y++) {
      const oldkey = `ch:1:${x}:${y}`;
      const newkey = `ch:1:${x + 6}:${y + 6}`;
      // move the chunk within the old redis instance
      const chunk = await oldredis.getAsync(oldkey);
      if (chunk) {
        const setNXArgs = [newkey, chunk];
        await oldredis.sendCommandAsync('SET', setNXArgs);
        await oldredis.delAsync(oldkey);
        console.log("Created Chunk ", newkey);
      }
      // and within the new redis instance
      const chunkl = await newredis.getAsync(oldkey);
      if (chunkl) {
        const setNXArgs = [newkey, chunkl];
        await newredis.sendCommandAsync('SET', setNXArgs);
        await newredis.delAsync(oldkey);
        console.log("Created Chunk ", newkey);
      }
    }
  }
}

copyChunks();