# ispconfig_logarchive

Manages long-term archiving of service logs in an ISPConfig environment using a data collector.
#### Table of Contents
- Overview
- Module Description - What the module does and why it is useful
- Setup - The basics of getting started with ispconfig_logarchive
- Usage - Configuration options
## Overview

This module uses a data collector service to archive service logs. It is configured to manage log files in an ISPConfig environment.
## Module Description

The module is structured to support multiple data collectors and destinations. Currently, only the fluentd data collector and the s3 destination plugin are available.
## Setup

```puppet
include ispconfig_logarchive
```
### Setup Requirements

If the chosen data_collector is fluentd, the following module is required:

- softecspa/puppet-fluentd
If the destination is s3 and the aws/s3 parameters are not passed, defaults are taken from the following global variables:

- $::aws_access_key
- $::aws_secret_key
- $::s3_logarchive_bucket
- $::s3_bucket_endpoint

If the destination is s3 and s3_logpath is not passed, a default value is used.
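When the global variables are not set, the s3 credentials can presumably be passed directly to the class. A minimal sketch, assuming the class parameter names mirror the global variables above (check the class manifest for the exact names; all values shown are placeholders):

```puppet
# Sketch: parameter names are assumed to match the global variables;
# the credential and bucket values below are illustrative only.
class { 'ispconfig_logarchive':
  aws_access_key       => 'AKIAEXAMPLEKEY',
  aws_secret_key       => 'example-secret-key',
  s3_logarchive_bucket => 'my-logarchive-bucket',
  s3_bucket_endpoint   => 's3-eu-west-1.amazonaws.com',
}
```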
## Usage

The parameters for each managed logfile can be defined or overridden by passing a hash to the corresponding class parameter:

- `$datacollector_$servicename_$(input|output)_opts`
Ex. 1: to define or override a config parameter for the apache2 log source in fluentd, use the hash parameter:

- fluentd_apache2_input_opts

Ex. 2: to define or override a config parameter for the apache2 log destination in fluentd, use the hash parameter:

- fluentd_apache2_output_opts
NOTE: apache2 also has a filter level, and only for apache2 is the hash fluentd_apache_filter_opts customizable.
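Putting the two examples together, overriding the apache2 input and output options might look like the following sketch (the override values are illustrative assumptions, not module defaults):

```puppet
# Sketch: overriding fluentd options for the apache2 logfile.
# Parameter names follow the $datacollector_$servicename_$(input|output)_opts
# pattern documented above; the values are illustrative only.
class { 'ispconfig_logarchive':
  fluentd_apache2_input_opts  => {
    'path'     => '/var/log/apache2/other_access.log',
    'pos_file' => '/var/tmp/apache2_other_fluentd.pos',
  },
  fluentd_apache2_output_opts => {
    'flush_interval' => '5m',
  },
}
```

Keys not present in the override hash presumably keep the default values listed below.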
These are the config options passed by default:
- apache2
- input
- path => /var/log/httpd/ispconfig_access_log
- pos_file => /var/tmp/apache2_fluentd.pos
- filter
- exclude1 => agent .*(dummy|nagios).*
- exclude2 => client $::logarchive_host_exclude_regexp
- output
- buffer_type => memory
- buffer_chunk_limit => 64m
- time_slice_format => %Y%m%d%H
- flush_interval => 15m
- postfix
- input
- path => /var/log/mail.log
- pos_file => /var/tmp/mail_fluentd.pos
- output
- buffer_type => memory
- buffer_chunk_limit => 64m
- time_slice_format => %Y%m%d%H
- flush_interval => 15m
- proftpd
- input
- path => /var/log/proftpd/proftpd.log
- pos_file => /var/tmp/proftpd_fluentd.pos
- output
- buffer_type => file
- buffer_path => /var/log/td-agent/buffer/s3_proftpd
- time_slice_format => %Y%m%d%H
- time_slice_wait => 5m
- buffer_chunk_limit => 32m
- proftpd_tls
- input
- path => /var/log/proftpd/tls.log
- pos_file => /var/tmp/proftpd_tls_fluentd.pos
- output
- buffer_type => file
- buffer_path => /var/log/td-agent/buffer/s3_proftpd_tls
- time_slice_format => %Y%m%d%H
- time_slice_wait => 5m
- buffer_chunk_limit => 32m
- auth.log
- input
- path => /var/log/auth.log
- pos_file => /var/tmp/auth_fluentd.pos
- output
- buffer_type => file
- buffer_path => /var/log/td-agent/buffer/s3_auth
- time_slice_format => %Y%m%d%H
- time_slice_wait => 5m
- buffer_chunk_limit => 32m
- puppet-run
- input
- path => /var/log/puppet/puppet-run.log
- pos_file => /var/tmp/puppet-run_fluentd.pos
- output
- buffer_type => file
- buffer_path => /var/log/td-agent/buffer/s3_puppet-run
- time_slice_format => %Y%m%d%H
- time_slice_wait => 50m
- buffer_chunk_limit => 32m
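As an illustration of how these options map to the generated fluentd (td-agent) configuration, the apache2 defaults above would correspond roughly to a source/match pair like the following sketch. The tag name and plugin types are assumptions; the actual template shipped with the module may differ:

```
<source>
  @type tail
  path /var/log/httpd/ispconfig_access_log
  pos_file /var/tmp/apache2_fluentd.pos
  tag apache2                      # assumed tag name
</source>

<match apache2>
  @type s3                         # s3 destination plugin
  buffer_type memory
  buffer_chunk_limit 64m
  time_slice_format %Y%m%d%H
  flush_interval 15m
</match>
```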