How to set max size for a specific file in linux


I have an application which writes to a particular log file for as long as the user session is active. What I am looking for is a way to put a max cap on the size of that log file so it does not grow beyond a particular size. Two scenarios which would be useful are:

  1. Any utility which keeps an eye on the log file and, as soon as it reaches the max size, starts truncating the file's content from the start so the application can keep appending content at the end.

  2. Any utility by which, while creating the file, I can specify its max size, so that when the file reaches that size it simply does not grow beyond that point.

What I don't want is:

  1. To set up a cron job or a script which monitors the file size after a particular interval of time (say 1 hour) and deletes its contents at that time.

As a shell script:

while : ; do
    inotifywait -e modify "$file"
    filesize=$(stat -c%s "$file")     # size in bytes
    if [ "$filesize" -gt "$maxsize" ] ; then
        tail -c "$truncsize" "$file" > /tmp/truncatedfile.$$
        mv /tmp/truncatedfile.$$ "$file"
    fi
done

Note that you might get some race conditions which may lead to losing log data.


How about truncate -s 10M log.txt?

Check man truncate for more details

Another command to create a file of a particular size is fallocate. Note that some versions of fallocate only accept file sizes in bytes (newer util-linux builds also accept suffixes such as K and M). To calculate the size in bytes for a specific file, for example 5 MB, you do: 5*1024*1024 = 5242880.
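A minimal sketch (the path and size are just examples):

```shell
# preallocate a 5 MB file; the size here is given in bytes
fallocate -l 5242880 /tmp/mylog.log
stat -c%s /tmp/mylog.log
```

Keep in mind that fallocate only reserves the space up front; it does not stop the file from growing past that size later.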

A mechanism that removes data from the start of a file and then lets you append more at the end is almost never available on any system. It is possible, it's just not something one does. It could be done at kernel level and be really efficient, but I've never seen that either (i.e. the kernel would simply unlink blocks from the start of the file and keep an offset in the inode for byte-level, as opposed to page-level, positioning).

On a Unix system, you can use mmap() and munmap() for that purpose. So when your app determines that the file size went over a certain amount, it would have to read from the start of the file, determine the location of, for example, the 10,000th line of the log, and then memmove() the rest over to the start. Finally, it would truncate the file and reopen it in append mode. This last step is a very important step...

#include <fcntl.h>
#include <unistd.h>
#include <cstring>
#include <sys/mman.h>

// WARNING: code without any error checking
// if multiple processes may run in parallel, make sure to use a lock as well
int fd = open("/var/log/mylog.log", O_RDWR);
ssize_t size = lseek(fd, 0, SEEK_END);
lseek(fd, 0, SEEK_SET);
char * start = (char *)mmap(nullptr, size,
                            PROT_READ | PROT_WRITE, MAP_SHARED,
                            fd, 0);
char * end = start + size;
char * l10000 = start; // search for the end of line 10,000
for(int line(0); line < 10000 && l10000 < end; ++line)
{
    for(; l10000 < end && *l10000 != '\n'; ++l10000);
    if(l10000 < end)
        ++l10000; // skip the '\n'
}
ssize_t new_size = end - l10000;
memmove(start, l10000, new_size);
munmap(start, size);
ftruncate(fd, new_size);
close(fd);

(Example found in sendmail::dequeue() on GitHub, which includes all the error checking not shown here.)

IMPORTANT: the memmove() call is going to be slow, especially on a rather large log file.

Note that most of the time, when a process opens a log file, it keeps it open, and that means changing the file under its feet won't do much good. Actually, in the mmap() example here, you would create a gap of zeroes ('\0' characters) between the moved data and the next write if you don't make sure to close and reopen the log (not shown in the code).

So, it's doable in code (shown here in C++, though you could easily get it to compile as C). If you'd rather stick to shell tools, however, logrotate is certainly your best bet. By default, at least on Ubuntu, logrotate runs only once per day; you can change that specifically for your application's logs or system wide.
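For reference, a minimal logrotate config sketch using its size directive (the paths here are placed under /tmp purely for illustration; real configs live in /etc/logrotate.d/):

```shell
# write a minimal logrotate config; `size 10M` triggers rotation once the
# log exceeds 10 MB, regardless of how often logrotate itself runs
cat > /tmp/mylog.conf <<'EOF'
/tmp/mylog.log {
    size 10M
    rotate 4
    compress
    missingok
    notifempty
}
EOF
# dry run to check the config (requires logrotate to be installed):
#   logrotate --debug /tmp/mylog.conf
```

The size directive only takes effect when logrotate actually runs, which is why the run frequency discussed below matters.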

You can run it hourly instead by moving or copying the logrotate script like so:

sudo cp /etc/cron.daily/logrotate /etc/cron.hourly/logrotate

You can also set up a per-minute CRON entry which runs that script. To edit the root crontab file:

sudo crontab -u root -e

Then add one line:

* * * * *   /etc/cron.daily/logrotate

Make sure to test and see that it works as expected. If you add such an entry, you should also remove the /etc/cron.daily/logrotate script so it does not run twice (once on the daily run and once on the per-minute run).

Just be aware that there is a lingering bug in CRON as shown in my bug report to Ubuntu. It can cause memory problems when using CRON too much (like once a minute).

Also, as mentioned previously with the code sample above, you must reopen the log file. Just rotating won't do you any good unless the application either reopens the log file each time it writes, or is told to rotate (i.e. close the old file and open the new one). Without that rotation kick, the application will continue appending data to the old file, no matter what it is now named: once a file is open, Unix tracks it by its inode, not its filename. (Under MS-Windows, you won't even be able to rename a file without first closing all accesses to it... which is very annoying!)
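That inode behavior is easy to demonstrate in the shell (the paths are just examples):

```shell
# open a "log" on fd 3 and keep it open, like a long-running app would
exec 3>>/tmp/demo.log
echo "before rotate" >&3
mv /tmp/demo.log /tmp/demo.log.1   # "rotate" by renaming
echo "after rotate" >&3            # still lands in the renamed file
cat /tmp/demo.log.1                # shows both lines
exec 3>&-                          # close the descriptor
```

The second write follows the open descriptor to the renamed file; /tmp/demo.log is not recreated until something opens that name again.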

In many cases, you either restart the whole app (because it's too dumb to know how to reopen the log), you send the app a signal so it reopens the log file, or the app notices on its own that the file changed, somehow...
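Which signal triggers a log reopen varies per daemon (SIGHUP and SIGUSR1 are common conventions; check the application's docs). As a sketch, with sleep standing in for a hypothetical daemon:

```shell
# start a stand-in "daemon" in the background and signal it
sleep 30 &
pid=$!
kill -HUP "$pid"          # many daemons reopen their log files on SIGHUP
wait "$pid" 2>/dev/null || true   # our stand-in just exits; a real daemon keeps running
```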

If the app is not capable of knowing, restarting it will be your only option. That may look weird to the user if the app has a UI.


As a crude workaround you can create a fixed-size image file, put a filesystem on it, and mount it as a loop device. The filesystem is of a fixed size, so a log written inside that mount can never grow beyond it.
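A sketch of that setup (the image path and 10 MB size are just examples; the mkfs and mount steps need root):

```shell
# create a fixed-size 10 MB image file
dd if=/dev/zero of=/tmp/logfs.img bs=1M count=10
# put a filesystem on it and mount it via a loop device:
#   mkfs.ext4 -F /tmp/logfs.img
#   sudo mkdir -p /mnt/logfs
#   sudo mount -o loop /tmp/logfs.img /mnt/logfs
# anything written under /mnt/logfs can never exceed 10 MB in total
```

The trade-off is that the writer gets a hard ENOSPC error when the filesystem fills up, rather than the log being trimmed.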

Alternatively, consider setting up logrotate to prune log files once they reach a certain size or age. With its size directive, the threshold may be specified in bytes (default), kilobytes (e.g. 100k), or megabytes (e.g. 100M).

There are lots of answers, but none explained nicely what else can be done. Looking into the man page for dd, it is possible to specify the size of a file precisely. This creates /tmp/zero_big_data_file.bin filled with zeros, with a size of 20 megabytes:

dd if=/dev/zero of=/tmp/zero_big_data_file.bin bs=1M count=20