Squid with Authentication on AD using Debian Jessie

What's up folks, here I will show you how to configure the Squid proxy server with authentication against AD, and how to configure LightSquid, which presents the access.log in a friendly format on a web page.

Prepare your system with the following shell script http://wiki.douglasqsantos.com.br/doku.php/confinicialjessie_en. If you do not use this script, please be aware that you may run into some problems along the way.

Some values that I will use in this how-to:

  1. Proxy IP: 192.168.1.20
  2. AD IP: 192.168.1.246
  3. Domain: douglasqsantos.com.br
  4. Non-secure dynamic updates must be enabled on the Windows DNS server so the Linux machine can register its record.

Here we will work with the following groups; you can use them as-is or decide to use different ones.

  1. warehouse-proxy
  2. attendance-proxy
  3. baixa-proxy
  4. bureau_d-proxy
  5. bureau_o-proxy
  6. cipa-proxy
  7. sales-proxy
  8. salesonline-proxy
  9. crm-proxy
  10. design-proxy
  11. dev-proxy
  12. direction-proxy
  13. dp-proxy
  14. facebook-proxy
  15. financial-proxy
  16. management-proxy
  17. maps-google-proxy
  18. marketing-proxy
  19. hr-proxy
  20. skype-proxy
  21. logistics-proxy
  22. youtube-proxy
  23. social-network-proxy

We need to set some environment variables before we start installing packages.

export DEBIAN_PRIORITY=critical
export DEBIAN_FRONTEND=noninteractive

We need to update the repositories and upgrade the system to be up-to-date.

aptitude update && aptitude dist-upgrade -y

Let's start by installing the packages for Samba, Winbind and the Kerberos client.

aptitude install -y samba samba-common smbclient winbind krb5-config krb5-user libpam-krb5 libnss-winbind libpam-winbind 

Now let's reset the variables to their default values.

unset DEBIAN_PRIORITY
unset DEBIAN_FRONTEND

Now we need to make a backup of /etc/resolv.conf before making any changes to the file.

cp -Rfa /etc/resolv.conf{,.bkp}

Let's change the file as we need.

vim /etc/resolv.conf
search douglasqsantos.com.br
domain douglasqsantos.com.br
nameserver 192.168.1.246

Now we need to run a DNS test to make sure that the server is answering as expected.

nslookup douglasqsantos.com.br
Server:   192.168.1.246
Address:  192.168.1.246#53

Name: douglasqsantos.com.br
Address: 192.168.1.246

Make sure that your system clock and the AD clock are in sync, otherwise we will get a lot of problems with Kerberos.

ntpdate -u a.usp.br
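Kerberos tolerates only a small clock skew (5 minutes by default), so it is worth keeping the clock in sync automatically. A minimal sketch using cron and the same ntpdate command as above (installing an NTP daemon would be the more robust option):

crontab -e
[...]
# Sync the clock once a day against the same NTP server used above
0 3 * * * /usr/sbin/ntpdate -u a.usp.br > /dev/null 2>&1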

Now we need to make a backup of the Kerberos client configuration file as follows.

cp -Rfa /etc/krb5.conf{,.bkp}

Now we need to change the file as follows.

vim /etc/krb5.conf
[libdefaults]
       default_realm = DOUGLASQSANTOS.COM.BR
       krb4_config = /etc/krb.conf
       krb4_realms = /etc/krb.realms
       kdc_timesync = 1
       ccache_type = 4
       forwardable = true
       proxiable = true
       v4_instance_resolve = false
       fcc-mit-ticketflags = true
       default_keytab_name = FILE:/etc/krb5.keytab
       v4_name_convert = {
               host = {
                       rcmd = host
                       ftp = ftp
               }
               plain = {
                       something = something-else
               }
       }
[realms]
DOUGLASQSANTOS.COM.BR = {
        kdc = 192.168.1.246
        admin_server = 192.168.1.246:749
        default_server = 192.168.1.246
}
[domain_realm]
        .douglasqsantos.com.br = DOUGLASQSANTOS.COM.BR
        douglasqsantos.com.br  = DOUGLASQSANTOS.COM.BR
[login]
        krb4_convert = true
        krb4_get_tickets = false
[kdc]
        profile = /etc/krb5kdc/kdc.conf
[appdefaults]
pam = {
        realm = DOUGLASQSANTOS.COM.BR
        ticket_lifetime = 1d
        renew_lifetime = 1d
        forwardable = true
        proxiable = false
        retain_after_close = false
        minimum_uid = 1000
        try_first_pass = true
        ignore_root = true
        debug = false
}
[logging]
        default = file:/var/log/krb5libs.log
        kdc = file:/var/log/krb5kdc.log
        admin_server = file:/var/log/kadmind.log

Now we need to make a backup of the Samba configuration file.

cp -Rfa /etc/samba/smb.conf{,.bkp}

Now we need to change the file; it needs to look like the example below.

vim /etc/samba/smb.conf
[global]
  workgroup = DOUGLASQSANTOS
  realm = DOUGLASQSANTOS.COM.BR
  netbios name = PROXY
  server string = PROXY
  security = ADS
  auth methods = winbind
  kerberos method = secrets and keytab
  winbind refresh tickets = yes
  load printers = No
  printing = bsd
  printcap name = /dev/null
  disable spoolss = Yes
  local master = No
  domain master = No
  winbind cache time = 15
  winbind enum users = Yes
  winbind enum groups = Yes
  winbind use default domain = Yes
  idmap config * : range = 10000-15000
  idmap config * : backend = tdb
  template shell = /bin/bash
  template homedir = /home/%U

Now we need to make another change so that ntlm_auth works properly: it needs winbind's privileged access, so let's add the squid user (proxy) to the winbindd_priv group.

gpasswd -a proxy winbindd_priv 
Adding user proxy to group winbindd_priv
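We can quickly confirm the membership with getent; the output should list the proxy user as a member of winbindd_priv.

getent group winbindd_priv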

Now we need to change the group of the winbindd_privileged directory, which stores the winbind pipe.

chgrp winbindd_priv /var/lib/samba/winbindd_privileged

Squid's helper expects the pipe at /var/run/samba/winbindd_privileged/pipe, but winbind creates it under /var/lib/samba/winbindd_privileged, so let's adjust it with a symlink.

ln -s /var/lib/samba/winbindd_privileged/pipe /var/run/samba/winbindd_privileged/pipe

Now we need to make a backup of /etc/nsswitch.conf, which controls where the system looks up users and groups.

cp /etc/nsswitch.conf{,.bkp}

Now we need to change the file as follows.

vim /etc/nsswitch.conf
passwd:         compat winbind
group:          compat winbind
shadow:         compat
 
hosts:          files dns
networks:       files
 
protocols:      db files
services:       db files
ethers:         db files
rpc:            db files
 
netgroup:       nis

Let's restart both the samba server and winbind to reload the new configuration.

systemctl restart samba-ad-dc
systemctl restart winbind

We need to obtain a new Kerberos ticket.

kinit douglas@DOUGLASQSANTOS.COM.BR

Let's list the ticket that was generated.

klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: douglas@DOUGLASQSANTOS.COM.BR

 Valid starting       Expires              Service principal
13-08-2015 10:00:06  13-08-2015 20:00:06  krbtgt/DOUGLASQSANTOS.COM.BR@DOUGLASQSANTOS.COM.BR
  renew until 14-08-2015 10:00:04

Now we need to join the machine to the domain as follows.

net ads join createupn=host/$(hostname).douglasqsantos.com.br@douglasqsantos.com.br -S dc01.douglasqsantos.com.br -k
Using short domain name -- DOUGLASQSANTOS
Joined 'PROXY' to dns domain 'douglasqsantos.com.br'

If something does not work as expected, we can enable debug mode and rerun the join to see what is going on.

net ads join createupn=host/$(hostname).douglasqsantos.com.br@douglasqsantos.com.br -S dc01.douglasqsantos.com.br -k --debuglevel=5

Above I used the hostname command to register our machine's name in the Kerberos database, -S to point to our DC, and -k to use the current Kerberos ticket.
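If you want to double-check the join afterwards, Samba's net tool can test it; it should report that the join is OK.

net ads testjoin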

We need to change the permissions of the /etc/krb5.keytab file as follows.

chmod 664 /etc/krb5.keytab
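We can also confirm that the host principals were written to the keytab, using the MIT klist tool from the krb5-user package installed earlier.

klist -k /etc/krb5.keytab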

Let's restart samba and winbind services again to reload all configuration after the join.

systemctl restart samba-ad-dc
systemctl restart winbind

If you need to remove the machine from the domain we can use the following command line to do that.

net ads leave douglasqsantos.com.br -U Administrator
Enter Administrator's password:
Deleted account for 'DEBIAN' in realm 'douglasqsantos.com.br'

Now we need to check the trust relationship with the AD

wbinfo -t
checking the trust secret for domain DOUGLASQSANTOS via RPC calls succeeded

Now we can list all the groups from AD on the Linux box.

wbinfo -g
winrmremotewmiusers__
domain computers
domain controllers
schema admins
enterprise admins
cert publishers
domain admins
domain users
domain guests
group policy creator owners
ras and ias servers
allowed rodc password replication group
denied rodc password replication group
read-only domain controllers
enterprise read-only domain controllers
cloneable domain controllers
dnsadmins
dnsupdateproxy
ti-admin
matriz-diretoria
matriz-gerencia
matriz-administracao
matriz-logistica

In order to list all the users from the domain we can use the following command line.

wbinfo -u
administrator
guest
krbtgt
douglas.santos
susan.cris
karolayne.santos
hillary.santos

Here I will not use Kerberos authentication to log in to the system, so we need to disable this feature with the following command.

pam-auth-update

Leave only [*] Unix authentication checked and select OK.

A good practice after all the changes we have made so far is to reboot the machine to reload everything.

reboot

Installing and Configuring the Squid Proxy Server

Let's update the repositories and install the Squid proxy.

aptitude update && aptitude install squid3 squid3-common squidclient -y

Now let's make a backup of /etc/squid3/squid.conf

cp -Rfa /etc/squid3/squid.conf{,.bkp}

Now let's remove everything from the squid.conf file

cat /dev/null > /etc/squid3/squid.conf

Now let's move on to the Squid configuration.

vim /etc/squid3/squid.conf
#/etc/squid3/squid.conf
# The socket addresses where Squid will listen for HTTP client requests.
http_port 3128

# Force to use the ipv4 to resolve dns first.
dns_v4_first on

# Email of local cache manager who will receive mail if the cache dies.
cache_mgr it@douglasqsantos.com.br

# A list of words which, if found in a URL, cause the object to be handled directly by this cache
hierarchy_stoplist cgi-bin ?

# Does not create cache of cgi requests
acl QUERY urlpath_regex cgi-bin \?

# Deny the cache of cgi requests
cache deny QUERY

# Objects greater than this size will NOT be saved on disk.
maximum_object_size 4096 KB

# Objects smaller than this size will NOT be saved on disk. The default is 0 KB, which means all responses can be stored.
minimum_object_size 0 KB

# Objects greater than this size will not be attempted to kept in the memory cache.
maximum_object_size_in_memory 64 KB

# Places a limit on how much additional memory squid will use as a memory cache of objects.
cache_mem 256 MB

# The cache by default continues downloading aborted requests which are almost completed (less than 16 KB remaining).
quick_abort_min -1 KB

# Some servers have been found to incorrectly signal the use of HTTP/1.0 persistent connections even on replies not
# compatible, causing significant delays.
detect_broken_pconn on

# HTTP clients may send a pipeline of 1+N requests to Squid using a single connection, without waiting for Squid to respond to the first
# of those requests. This option limits the number of concurrent requests Squid will try to handle in parallel
# WARNING: pipelining breaks NTLM and Negotiate/Kerberos authentication
pipeline_prefetch off

# Maximum number of FQDN cache entries.
fqdncache_size 1024

# usage: refresh_pattern [-i] regex min percent max [options]
# 'Min' is the time (in minutes) an object without an explicit expiry time should be considered fresh.
# 'Percent' is a percentage of the objects age (time since last modification age) an object without explicit expiry time will be considered fresh.
# 'Max' is an upper limit on how long objects without an explicit expiry time will be considered fresh.
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
refresh_pattern .               0       20%     4320


# The low-water mark for cache object replacement. Replacement begins when the swap (disk) usage is above the low-water mark and
# attempts to maintain utilization near the low-water mark.
cache_swap_low 90

# The high-water mark for cache object replacement. Replacement begins when the swap (disk) usage is above the low-water mark and
# attempts to maintain utilization near the low-water mark. As swap utilization gets close to high-water  mark object eviction becomes more aggressive.
cache_swap_high 95

# Configures whether and how Squid logs HTTP and ICP transactions. If access logging is enabled, a single line is logged for every matching HTTP or ICP request.
access_log /var/log/squid3/access.log squid

# This is where general information about Squid behavior goes.
cache_log /var/log/squid3/cache.log

# Logs the activities of the storage manager. Shows which objects are ejected from the cache, and which objects are saved and for how long.
cache_store_log /var/log/squid3/store.log

# Format: cache_dir Type Directory-Name Fs-specific-data [options]
# You can specify multiple cache_dir lines to spread the cache among different disk partitions.
# Type specifies the kind of storage system to use.
# 'Directory' is a top-level directory where cache swap files will be stored.
# cache_dir aufs Directory-Name Mbytes L1 L2 [options]
# 'Mbytes' is the amount of disk space (MB) to use under this directory.  The default is 100 MB.  Change this to suit your configuration.
#  Do NOT put the size of your disk drive here. Instead, if you want Squid to use the entire disk drive, subtract 20% and use that value.
# 'L1' is the number of first-level subdirectories which will be created under the 'Directory'.  The default is 16.
# 'L2' is the number of second-level subdirectories which will be created under each first-level directory. The default is 256
cache_dir aufs /var/spool/squid3 100 16 256

# Specifies the number of logfile rotations to make when you type 'squid -k rotate'. The default is 10, which will rotate with extensions 0 through 9.
# Note, from Squid-3.1 this option is only a default for cache.log, that log can be rotated separately by using debug_options.
# Note2, for Debian/Linux the default of logfile_rotate is zero, since it includes external logfile-rotation methods.
logfile_rotate 0

# Location of the host-local IP name-address associations database. Most Operating Systems have such a file on different default locations:
# Un*X & Linux:    /etc/hosts
hosts_file /etc/hosts

# Google recaptcha
acl google_recaptcha urlpath_regex ^\/recaptcha\/api.js
http_access allow google_recaptcha

# No cache
acl NOCACHE url_regex "/etc/squid3/rules/no_cache"
#no_cache deny NOCACHE
always_direct allow NOCACHE
cache deny NOCACHE

# Allow LAN to connect to LAN servers without authentication
acl douglas-networks dst "/etc/squid3/rules/douglas-networks"
http_access allow douglas-networks

# Allow LAN to connect to local domains without authentication
acl local-servers-domain dstdom_regex -i "/etc/squid3/rules/local-servers-domain"
always_direct allow local-servers-domain

# Allow access to websites that do not need to be authenticated such as website of the company
acl company dstdom_regex -i "/etc/squid3/rules/company-websites"
http_access allow company

# Allow access to websites that storage, switches and other devices need access to update
acl update-websites url_regex -i "/etc/squid3/rules/update-websites"
http_access allow update-websites

# Allow access to ip address that storage, switches and other devices need access to update
acl update-websites-dst dst "/etc/squid3/rules/update-websites-dst"
http_access allow update-websites-dst

# Allow some ip address clients that do not need to authenticate to access the internet
acl clients-allowed src "/etc/squid3/rules/clientes-allowed"
http_access allow clients-allowed

# Allow some mac address clients that do not need to authenticate to access the internet
acl macaddress-allowed arp "/etc/squid3/rules/macaddress-allowed"
http_access allow macaddress-allowed

# Configuration of the Authentication using Winbind to get the information for AD

# Setting up the kind of authentication that will be used, in this case ntlm_auth
# "children" numberofchildren [startup=N] [idle=N]
# The maximum number of authenticator processes to spawn (default 5).
# The startup= and idle= options permit some skew in the exact amount run. A minimum of startup=N will begin during startup and reconfigure.
# Squid will start more in groups of up to idle=N in an attempt to meet traffic needs and to keep idle=N free above those traffic needs up to
# the maximum. e.g(auth_param ntlm children 20 startup=0 idle=1)
# credentialsttl the maximum amount of time the credential stay in the logged in user cache since their last request.
# ttl=n TTL in seconds for cached results (defaults to 3600 for 1 hour)
# children-max=n Maximum number of acl helper processes spawned to service external acl lookups of this type. (default 20)
# children-startup=n Minimum number of acl helper processes to spawn during startup and reconfigure to service external acl lookups of this type. (default 0)
# children-idle=n Number of acl helper processes to keep ahead of traffic loads.
# %LOGIN  Authenticated user login name
auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 35
auth_param ntlm keep_alive off
#auth_param basic program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-basic
#auth_param basic children 5
#auth_param basic realm DOUGLASQSANTOS.COM.BR
#auth_param basic credentialsttl 1 hours
external_acl_type ad_group ttl=600 children-max=35 %LOGIN /usr/lib/squid3/ext_wbinfo_group_acl

# Default ACLs
#acl manager proto cache_object
#acl localhost src 127.0.0.1/32
acl SSL_ports port 443 # https
acl SSL_ports port 563 # snews
acl SSL_ports port 873 # rsync
acl SSL_ports port 7071 #zimbra admin
acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 563 # https, snews
acl Safe_ports port 70 # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl Safe_ports port 631 # cups
acl Safe_ports port 873 # rsync
acl Safe_ports port 901 # SWAT
acl Safe_ports port 1080
acl Safe_ports port 1863
acl Safe_ports port 8443 # https
acl Safe_ports port 5222 # gTalk
acl Safe_ports port 5223 # gTalk
acl Safe_ports port 47057 # torrent
acl purge method PURGE
acl CONNECT method CONNECT

# Access default
http_access allow manager localhost
http_access deny manager
http_access allow purge localhost
http_access deny purge
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports


# Mapping the groups from AD to Linux
#    ACL Name            Type     Name     AD Group
acl warehouse-proxy      external ad_group warehouse-proxy
acl attendance-proxy     external ad_group attendance-proxy
acl baixa-proxy          external ad_group baixa-proxy
acl bureau_d-proxy       external ad_group bureau_d-proxy
acl bureau_o-proxy       external ad_group bureau_o-proxy
acl cipa-proxy           external ad_group cipa-proxy
acl sales-proxy          external ad_group sales-proxy
acl salesonline-proxy    external ad_group salesonline-proxy
acl crm-proxy            external ad_group crm-proxy
acl design-proxy         external ad_group design-proxy
acl dev-proxy            external ad_group dev-proxy
acl direction-proxy      external ad_group direction-proxy
acl dp-proxy             external ad_group dp-proxy
acl facebook-proxy       external ad_group facebook-proxy
acl financial-proxy      external ad_group financial-proxy
acl management-proxy     external ad_group management-proxy
acl maps-google-proxy    external ad_group maps-google-proxy
acl marketing-proxy      external ad_group marketing-proxy
acl hr-proxy             external ad_group hr-proxy
acl skype-proxy          external ad_group skype-proxy
acl logistics-proxy      external ad_group logistics-proxy
acl youtube-proxy        external ad_group youtube-proxy
acl social-network-proxy external ad_group social-network-proxy

# Whitelists / Blacklists
acl warehouse-proxy-websites       dstdom_regex   -i "/etc/squid3/rules/warehouse-proxy-websites"
acl attendance-proxy-websites      dstdom_regex   -i "/etc/squid3/rules/attendance-proxy-websites"
acl baixa-proxy-websites           dstdom_regex   -i "/etc/squid3/rules/baixa-proxy-websites"
acl bureau_d-proxy-websites        dstdom_regex   -i "/etc/squid3/rules/bureau_d-proxy-websites"
acl bureau_o-proxy-websites        dstdom_regex   -i "/etc/squid3/rules/bureau_o-proxy-websites"
acl cipa-proxy-websites            dstdom_regex   -i "/etc/squid3/rules/cipa-proxy-websites"
acl sales-proxy-websites           dstdom_regex   -i "/etc/squid3/rules/sales-proxy-websites"
acl salesonline-proxy-websites     dstdom_regex   -i "/etc/squid3/rules/salesonline-proxy-websites"
acl crm-proxy-websites             dstdom_regex   -i "/etc/squid3/rules/crm-proxy-websites"
acl design-proxy-websites          dstdom_regex   -i "/etc/squid3/rules/design-proxy-websites"
acl dev-proxy-websites             dstdom_regex   -i "/etc/squid3/rules/dev-proxy-websites"
acl dp-proxy-websites              dstdom_regex   -i "/etc/squid3/rules/dp-proxy-websites"
acl facebook-proxy-websites        dstdom_regex   -i "/etc/squid3/rules/facebook-proxy-websites"
acl financial-proxy-websites       dstdom_regex   -i "/etc/squid3/rules/financial-proxy-websites"
acl maps-google-proxy-websites     dstdom_regex   -i "/etc/squid3/rules/maps-google-proxy-websites"
acl marketing-proxy-websites       dstdom_regex   -i "/etc/squid3/rules/marketing-proxy-websites"
acl hr-proxy-websites              dstdom_regex   -i "/etc/squid3/rules/hr-proxy-websites"
acl skype-proxy-websites           dst               "/etc/squid3/rules/skype-proxy-websites"
acl skype2-proxy-websites          dstdom_regex   -i "/etc/squid3/rules/skype2-proxy-websites"
acl logistics-proxy-websites       dstdom_regex   -i "/etc/squid3/rules/logistics-proxy-websites"
acl youtube-proxy-websites         dstdom_regex   -i "/etc/squid3/rules/youtube-proxy-websites"
acl social-networks-proxy-websites dstdom_regex   -i "/etc/squid3/rules/social-networks-proxy-websites"
acl downloads                      urlpath_regex  -i "/etc/squid3/rules/downloads"
acl streaming                      req_mime_type  -i "/etc/squid3/rules/streaming"
acl blocked-users                  proxy_auth     -i "/etc/squid3/rules/blocked-users"

# Below we have an acl to control the access using the time, but the proxy will not drop the connection that is already established.
#acl lunch-time time MTWHFAS 12:00-13:00

# The acl below will require an authentication from the users before allow them access anything
#acl users-authenticated proxy_auth REQUIRED

#The acl below will set up the websites that will be available during lunch time
#acl lunch-websites            url_regex     -i "/etc/squid3/rules/lunch-websites"

# Setting up the access for all the groups defined previously
http_access deny blocked-users
http_access allow direction-proxy
http_access allow management-proxy !social-networks-proxy-websites
http_access allow facebook-proxy facebook-proxy-websites
http_access allow youtube-proxy youtube-proxy-websites
http_access allow skype-proxy skype-proxy-websites
http_access allow skype-proxy skype2-proxy-websites
http_access allow social-network-proxy social-networks-proxy-websites
http_access deny downloads
http_reply_access deny streaming
http_access allow warehouse-proxy warehouse-proxy-websites
http_access allow attendance-proxy attendance-proxy-websites
http_access allow baixa-proxy baixa-proxy-websites
http_access allow bureau_d-proxy bureau_d-proxy-websites
http_access allow bureau_o-proxy bureau_o-proxy-websites
http_access allow cipa-proxy cipa-proxy-websites
http_access allow sales-proxy sales-proxy-websites
http_access allow salesonline-proxy salesonline-proxy-websites
http_access allow crm-proxy crm-proxy-websites
http_access allow design-proxy design-proxy-websites
http_access allow dev-proxy dev-proxy-websites
http_access allow dp-proxy dp-proxy-websites
http_access allow financial-proxy financial-proxy-websites
http_access allow maps-google-proxy maps-google-proxy-websites
http_access allow marketing-proxy marketing-proxy-websites
http_access allow hr-proxy hr-proxy-websites
http_access allow logistics-proxy logistics-proxy-websites

# The following rule sets the default policy to deny everything that reaches this point.
http_access deny all

# Allow replies to client requests. This is complementary to http_access.
http_reply_access allow all

# Allowing or Denying access to the ICP port based on define access lists
icp_access allow all

# Determines whether network access is permitted when satisfying a request.
miss_access allow all

# If you want to present a special hostname in error messages, etc, define this. Otherwise, the return value of gethostname() will be used.
visible_hostname proxy.douglasqsantos.com.br

# If you wish to create your own versions of the default error files to customize them to suit your company copy the error/template files
# to another directory and point this tag at them.
error_directory /usr/share/squid3/errors/en

# CSS Stylesheet to pattern the display of Squid default error pages.
err_page_stylesheet /etc/squid3/errorpage.css

# If you start Squid as root, it will change its effective/real UID/GID to the user specified below. The default is to change to UID of proxy.
cache_effective_user proxy

# Squid sets the GID to the effective user's default group ID (taken from the password file) and supplementary group list from the groups membership.
#cache_effective_group proxy

# By default Squid leaves core files in the directory from where it was started.
coredump_dir /var/spool/squid3

Now we need to create the directory that will store the rules

mkdir -p /etc/squid3/rules

Now let's create the files to control the access

Let's create the file that controls which domains are never cached

vim /etc/squid3/rules/no_cache
#/etc/squid3/rules/no_cache
(^|\.)douglas\.com\.br$

Let's create the file to control the local networks

vim /etc/squid3/rules/douglas-networks
#/etc/squid3/rules/douglas-networks
172.32.0.0/24
10.0.0.0/24
172.16.0.0/24

Now let's create the file that will control the local servers domain

vim /etc/squid3/rules/local-servers-domain
#/etc/squid3/rules/local-servers-domain
(^|\.)douglas\.com\.br$
(^|\.)douglas\.lan$
(^|\.)douglas\.wiki\.br$

Now we need to create the file that will control the company's websites or the websites that are always in use

vim /etc/squid3/rules/company-websites
#/etc/squid3/rules/company-websites
(^|\.)sp\.gov\.br$
(^|\.)squid-cache\.org$
(^|\.)transporteinterno\.com\.br$
(^|\.)zappos\.com$
(^|\.)portalpublicitario\.blog\.br$
(^|\.)kissmetrics\.com$
(^|\.)nspmotion\.com$
(^|\.)cdn\.atendimen\.to$
(^|\.)neoassist\.com$
(^|\.)stackoverflow\.com$
(^|\.)imgur\.com$
(^|\.)sstatic\.net$
(^|\.)googleapis\.com$
(^|\.)adzerk\.net$
(^|\.)quantserve\.com$

Now we need to create the file that will control the websites used for software updates, such as Microsoft and Dell

vim /etc/squid3/rules/update-websites
#/etc/squid3/rules/update-websites
(^|\.)mozilla\.org$
(^|\.)br\.pinterest\.com$
(^|\.)dell\.com$
(^|\.)dell\.com\.br$
(^|\.)f-secure\.com$
(^|\.)fpdownload\.macromedia\.com$
(^|\.)hplip\.com$
(^|\.)hplip\.com\.br$
(^|\.)javadl\.sun\.com$
(^|\.)myrp\.com\.br$
(^|\.)nfe\.com\.br$
(^|\.)openprinting\.com$
(^|\.)openprinting\.com\.br$
(^|\.)windowsupdate\.com$
(^|\.)ws\.ebs\.com\.br$
(^|\.)zsl\.com\.br$

Now we need to create the file that will control the websites used to update software by IP Address

vim /etc/squid3/rules/update-websites-dst
#/etc/squid3/rules/update-websites-dst
161.148.231.100
170.66.52.12
172.17.0.137
172.31.0.137
177.43.59.206 
186.215.184.100
186.233.149.109
186.233.149.114
189.21.117.102
189.21.117.46
200.150.7.135
200.175.6.50
200.186.46.18
200.195.146.52
200.196.152.40
200.198.239.19
200.199.34.41
200.201.160.0/20
200.202.195.98
200.218.208.91
200.218.209.91
177.39.17.200

Now we need to create the file that lists the client IP addresses that do not need to authenticate to access the Internet

vim /etc/squid3/rules/clientes-allowed
#/etc/squid3/rules/clientes-allowed
10.0.0.254/32

Now we need to create the file that lists the client MAC addresses that do not need to authenticate to access the Internet

vim /etc/squid3/rules/macaddress-allowed
#/etc/squid3/rules/macaddress-allowed
D4:F4:6F:1E:FF:11
20:C9:A0:B8:FE:7F

Now we need to create the file that will control the websites allowed to warehouse

vim /etc/squid3/rules/warehouse-proxy-websites
#/etc/squid3/rules/warehouse-proxy-websites
(^|\.)americanas\.com\.br$
(^|\.)avast\.com\.br$
(^|\.)bancocentral\.com\.br$
(^|\.)bigsupermercados\.com\.br$
(^|\.)carrefour\.com\.br$
(^|\.)casasbahia\.com\.br$
(^|\.)colombo\.com\.br$
(^|\.)condor\.com\.br$
(^|\.)decolar\.com\.br$
(^|\.)despegar\.com\.br$
(^|\.)e-planning\.com\.br$
(^|\.)extra\.com\.br$
(^|\.)guiamais\.com\.br$
(^|\.)hagah\.com\.br$
(^|\.)lojasmm\.com\.br$  
(^|\.)magazineluiza\.com\.br$
(^|\.)pontofrio\.com\.br$
(^|\.)sos102\.com\.br$   
(^|\.)staticontent\.com\.br$
(^|\.)telelistas\.com\.br$
(^|\.)tripadvisor\.com\.br$
(^|\.)voegol\.com\.br$
(^|\.)walmart\.com\.br$

Now we need to create the file that will control the websites allowed to attendance

vim /etc/squid3/rules/attendance-proxy-websites
#/etc/squid3/rules/attendance-proxy-websites
(^|\.)maislog\.com$
(^|\.)agencia\.red$
(^|\.)agtutoia\.com\.br$
(^|\.)aviancacargo\.com\.br$
(^|\.)azulcargo\.com\.br$
(^|\.)centraldepostagens\.com\.br$
(^|\.)correiomagico\.com$
(^|\.)correios\.com\.br$
(^|\.)eurocartoes\.com\.br$
(^|\.)fazenda\.gov\.br$
(^|\.)futuraimbativel\.com\.br$
(^|\.)itau\.com\.br$
(^|\.)jaraujo\.com\.br$
(^|\.)lancargo\.com$
(^|\.)nspenha\.com\.br$  
(^|\.)postalacf\.com\.br$
(^|\.)receita\.gov\.br$  
(^|\.)sfpoint\.com\.br$  
(^|\.)sintegra\.gov\.br$ 
(^|\.)sp\.gov\.br$       
(^|\.)tamcargo\.com\.br$ 
(^|\.)transminato\.com\.br$
(^|\.)transpencargas\.com\.br$

Now we need to create the file that will control the websites allowed to baixa

vim /etc/squid3/rules/baixa-proxy-websites
#/etc/squid3/rules/baixa-proxy-websites
(^|\.)atualcard\.com\.br$
(^|\.)cartoesmaisbarato\.com\.br$
(^|\.)cartoespaulista\.com\.br$
(^|\.)graficacores\.com\.br$
(^|\.)imprimarapido\.com\.br$
(^|\.)paulistacartoes\.com\.br$
(^|\.)squid-cache\.org$

Now we need to create the file that will control the websites allowed to bureau_d

vim /etc/squid3/rules/bureau_d-proxy-websites
#/etc/squid3/rules/bureau_d-proxy-websites
(^|\.)adobe\.com$
(^|\.)creativecommons\.com\.br$
(^|\.)google\.com\.br$
(^|\.)gov\.br$
(^|\.)gstatic\.com$
(^|\.)itau\.com\.br$
(^|\.)java\.com$
(^|\.)symantec\.com\.br$
(^|\.)verisign\.com\.br$

Now we need to create the file that will control the websites allowed to bureau_o

vim /etc/squid3/rules/bureau_o-proxy-websites
#/etc/squid3/rules/bureau_o-proxy-websites
(^|\.)adobe\.com$
(^|\.)google\.com\.br$
(^|\.)gov\.br$
(^|\.)gstatic\.com$
(^|\.)itau\.com\.br$
(^|\.)java\.com$
(^|\.)mozilla\.org$ 
(^|\.)verisign\.com\.br$

Now we need to create the file that will control the websites allowed to cipa

vim /etc/squid3/rules/cipa-proxy-websites
#/etc/squid3/rules/cipa-proxy-websites
(^|\.)ajatto\.com$
(^|\.)atualcard\.com\.br$
(^|\.)cartoesmaisbarato\.com\.br$
(^|\.)graficacores\.com\.br$
(^|\.)imprimarapido\.com\.br$
(^|\.)maislog\.com\.br$
(^|\.)paulistacartoes\.com\.br$
(^|\.)pubsites\.com\.br$
(^|\.)scitechinfo\.com\.br$
(^|\.)squid-cache\.org$

Now we need to create the file that will control the websites allowed to sales

vim  /etc/squid3/rules/sales-proxy-websites
#/etc/squid3/rules/sales-proxy-websites
(^|\.)addthis\.com$
(^|\.)ajatto\.com$
(^|\.)atualcard\.com\.br$
(^|\.)cartoesmaisbarato\.com\.br$
(^|\.)graficacores\.com\.br$
(^|\.)imprimarapido\.com\.br$
(^|\.)maislog\.com\.br$
(^|\.)newrelic\.com$
(^|\.)paulistacartoes\.com\.br$
(^|\.)pubsites\.com\.br$
(^|\.)scitechinfo\.com\.br$
(^|\.)socialprint\.com\.br$
(^|\.)tcdn\.com\.br$
(^|\.)topprecos\.com\.br$
(^|\.)ttray\.com\.br$

Now we need to create the file that will control the websites allowed to salesonline

vim /etc/squid3/rules/salesonline-proxy-websites
#/etc/squid3/rules/salesonline-proxy-websites
(^|\.)ajatto\.com$
(^|\.)atualcard\.com\.br$
(^|\.)cartoesmaisbarato\.com\.br$
(^|\.)graficacores\.com\.br$
(^|\.)imprimarapido\.com\.br$
(^|\.)lilogistica\.com$
(^|\.)lilogistica\.com\.br$ 
(^|\.)maislog\.com\.br$
(^|\.)paulistacartoes\.com\.br$
(^|\.)pubsites\.com\.br$
(^|\.)scitechinfo\.com\.br$
(^|\.)socialprint\.com\.br$

Now we need to create the file that will control the websites allowed to crm

vim /etc/squid3/rules/crm-proxy-websites
#/etc/squid3/rules/crm-proxy-websites
(^|\.)atualcard\.com\.br$
(^|\.)cartoesmaisbarato\.com\.br$
(^|\.)graficacores\.com\.br$
(^|\.)imprimarapido\.com\.br$
(^|\.)correios\.com$
(^|\.)lilogistica\.com$
(^|\.)lilogistica\.com\.br$ 
(^|\.)maislog\.com\.br$
(^|\.)neoassist\.com$
(^|\.)paulistacartoes\.com\.br$
(^|\.)pubsites\.com\.br$
(^|\.)scitechinfo\.com\.br$
(^|\.)surveymonkey\.com$
(^|\.)surveymonkey\.net$

Now we need to create the file that will control the websites allowed to design

vim /etc/squid3/rules/design-proxy-websites
#/etc/squid3/rules/design-proxy-websites
(^|\.)123rf\.com$
(^|\.)atualcard\.com\.br$
(^|\.)behance\.net$
(^|\.)cloudfront\.net$   
(^|\.)capitalcartoes\.com\.br$
(^|\.)cartoesmaisbarato\.com\.br$ 
(^|\.)cartoespaulista\.com\.br$
(^|\.)flickr\.com$       
(^|\.)freepik\.com$
(^|\.)graficacores\.com\.br$
(^|\.)imprimarapido\.com\.br$
(^|\.)lilogistica\.com$  
(^|\.)lilogistica\.com\.br$
(^|\.)maislog\.com\.br$
(^|\.)paulistacartoes\.com\.br$
(^|\.)pubsites\.com\.br$
(^|\.)scitechinfo\.com\.br$
(^|\.)socialprint\.com\.br$
(^|\.)thenounproject\.com$

Now we need to create the file that will control the websites allowed to development

vim /etc/squid3/rules/dev-proxy-websites
#/etc/squid3/rules/dev-proxy-websites
(^|\.)123rf\.com$
(^|\.)advertising\.com$  
(^|\.)adzerk\.net$
(^|\.)amazonaws\.com$    
(^|\.)mysql\.com$
(^|\.)w3schools\.com$    
(^|\.)paulirish\.com$    
(^|\.)cloudflare\.com$   
(^|\.)bootstrapcdn\.com$ 
(^|\.)cloudfront\.net$
(^|\.)digicert\.com$     
(^|\.)fontastic\.me$     
(^|\.)fontawesome\.io$   
(^|\.)getcomposer\.com$  
(^|\.)getcomposer\.org$
(^|\.)git\.com$
(^|\.)github\.com$
(^|\.)github\.io$        
(^|\.)githubapp\.com$    
(^|\.)githubusercontent\.com$
(^|\.)google-analytics\.com$
(^|\.)googleapis\.com$
(^|\.)google\.com$

Now we need to create the file that will control the websites allowed to dp

vim /etc/squid3/rules/dp-proxy-websites
#/etc/squid3/rules/dp-proxy-websites
(^|\.)agricont\.com\.br$
(^|\.)alvoredo\.com\.br$ 
(^|\.)amil\.com\.br$
(^|\.)amilblue\.com\.br$ 
(^|\.)amilnet\.com\.br$  
(^|\.)aspal\.com\.br$    
(^|\.)assejepar\.com\.br$
(^|\.)bbseguros\.com\.br$
(^|\.)br\.linkedin\.com$ 
(^|\.)bradescoseguros\.com\.br$
(^|\.)caged\.com\.br$    
(^|\.)calculoexato\.com\.br$
(^|\.)cc\.ebs\.com\.br$
(^|\.)centraljuridica\.com\.br$
(^|\.)chat2\.ebs\.com\.br$
(^|\.)chinapaper\.com\.br$
(^|\.)cieepr\.com\.br$   
(^|\.)cohabct\.com\.br$  
(^|\.)construtoramga\.com\.br$
(^|\.)consulta-sd\.datamec\.com\.br$
(^|\.)consultoresassociados\.com\.br$
(^|\.)contadez\.com\.br$ 
(^|\.)cordilheiragestao\.com\.br$ 
(^|\.)crcpr\.com\.br$
(^|\.)curitiba\.pr\.gov\.br$

Now we need to create the file that will control the websites allowed for those who can access Facebook

vim /etc/squid3/rules/facebook-proxy-websites
#/etc/squid3/rules/facebook-proxy-websites
(^|\.)akamaihd\.net$
(^|\.)chat\.facebook\.com$
(^|\.)edge-chat\.facebook\.com$
(^|\.)facebook\.com$     
(^|\.)facebook\.com\.br$ 
(^|\.)facebook\.net$     
(^|\.)fbcdn\.net$ 

Now we need to create the file that will control the websites allowed to financial

vim /etc/squid3/rules/financial-proxy-websites
#/etc/squid3/rules/financial-proxy-websites
(^|\.)activex\.com\.br$
(^|\.)administradorafiel\.com\.br$
(^|\.)bancobrasil\.com\.br$
(^|\.)bb\.com\.br$       
(^|\.)bbseguros\.com\.br$
(^|\.)behance\.net$      
(^|\.)bootstrapcdn\.com$ 
(^|\.)bradesco\.com\.br$
(^|\.)bradesconetempresa\.b\.br$
(^|\.)bradescopj\.com\.br$
(^|\.)cartaometrocard\.com\.br$
(^|\.)cemig\.com\.br$    
(^|\.)cielo\.com\.br$    
(^|\.)cloudfront\.net$   
(^|\.)codecs\.com\.br$   
(^|\.)comodoca2\.com$    
(^|\.)comodoca\.com$     
(^|\.)copasa\.com\.br$   
(^|\.)copel\.com$
(^|\.)copel\.com\.br$    
(^|\.)correios\.com\.br$ 
(^|\.)falevono\.com\.br$
(^|\.)fazenda\.gov\.br$  
(^|\.)fisconet\.com\.br$
(^|\.)geotrust\.com$

Now we need to create the file that will control the websites allowed for those who can access Google Maps

vim /etc/squid3/rules/maps-google-proxy-websites
#/etc/squid3/rules/maps-google-proxy-websites
(^|\.)google\.com\.br$
(^|\.)gstatic\.com$
(^|\.)tools\.google\.com$
(^|\.)mts1\.google\.com$ 
(^|\.)mts0\.google\.com$ 
(^|\.)googleapis\.com$   
(^|\.)ggpht\.com$        
(^|\.)maps\.google\.com\.br$
(^|\.)symcd\.com$
(^|\.)maps\.google\.com$ 
(^|\.)mt0\.google\.com$ 

Now we need to create the file that will control the websites allowed to marketing

vim /etc/squid3/rules/marketing-proxy-websites
#/etc/squid3/rules/marketing-proxy-websites
(^|\.)123rf\.com$
(^|\.)4over4\.com$
(^|\.)acasaqueaminhavoqueria\.com$
(^|\.)addthis\.com$      
(^|\.)agenciamestre\.com$
(^|\.)ajatto\.com$
(^|\.)allinmail\.com\.br$
(^|\.)alntransportes\.com\.br$
(^|\.)amazonaws\.com$
(^|\.)analytics\.com$    
(^|\.)ans\.com\.br$      
(^|\.)apoiograf\.com\.br$
(^|\.)assembla\.com$     
(^|\.)atualcard\.com\.br$
(^|\.)atualtec\.com$     
(^|\.)atwimg\.com$
(^|\.)axure\.com$
(^|\.)balsamiq\.com$
(^|\.)barra\.globo\.com$
(^|\.)behance\.net$

Now we need to create the file that will control the websites allowed to hr

vim /etc/squid3/rules/hr-proxy-websites
#/etc/squid3/rules/hr-proxy-websites
(^|\.)agricont\.com\.br$
(^|\.)aliancadobrasil\.com\.br$
(^|\.)alvoredo\.com\.br$ 
(^|\.)amil\.com\.br$     
(^|\.)amilblue\.com\.br$ 
(^|\.)amilnet\.com\.br$  
(^|\.)aspal\.com\.br$    
(^|\.)assejepar\.com\.br$
(^|\.)bb\.com\.br$
(^|\.)bbseguros\.com\.br$
(^|\.)br\.linkedin\.com$ 
(^|\.)bradesconetempresa\.b\.br$
(^|\.)bradescopj\.com\.br$
(^|\.)bradescoseguros\.com\.br$
(^|\.)caged\.com\.br$    
(^|\.)calculoexato\.com\.br$
(^|\.)cartaometrocard\.com\.br$
(^|\.)cc\.ebs\.com\.br$
(^|\.)centraljuridica\.com\.br$
(^|\.)chat2\.ebs\.com\.br$
(^|\.)chinapaper\.com\.br$
(^|\.)cieepr\.com\.br$   
(^|\.)cohabct\.com\.br$
(^|\.)companheiro\.com\.br$

Now we need to create the file that will control the addresses allowed for those who can access Skype

vim /etc/squid3/rules/skype-proxy-websites
#/etc/squid3/rules/skype-proxy-websites
111.221.74.0/24
111.221.77.0/24
157.55.130.0/24
157.55.235.0/24
157.55.56.0/24
157.56.52.0/24
213.199.179.0/24         
64.4.23.0/24
65.55.223.0/24

Now we need to create the file that will control the websites allowed for those who can access Skype in the web browser

vim /etc/squid3/rules/skype2-proxy-websites
#/etc/squid3/rules/skype2-proxy-websites
(^|\.)live\.com$
(^|\.)msecnd\.net$
(^|\.)msocsp\.com$
(^|\.)omniroot\.com$
(^|\.)skype\.com$
(^|\.)skypeassets\.com$
(^|\.)symcd\.com$
(^|\.)trouter\.io$

Now we need to create the file that will control the websites allowed to logistics

vim /etc/squid3/rules/logistics-proxy-websites
#/etc/squid3/rules/logistics-proxy-websites
(^|\.)maislog\.com$
(^|\.)acfrs\.com\.br$
(^|\.)acfsc\.com\.br$
(^|\.)atlastranslog\.com\.br$
(^|\.)atlastransportes\.com\.br$
(^|\.)aviancacargo\.com\.br$
(^|\.)azulcargo\.com\.br$
(^|\.)braspress\.com\.br$
(^|\.)correios\.com\.br$
(^|\.)detran\.com\.br$   
(^|\.)directtalk\.com\.br$
(^|\.)github\.io$
(^|\.)gollog\.com\.br$
(^|\.)gov\.br\.com\.br$  
(^|\.)itau\.com\.br$     
(^|\.)lancargo\.com$     
(^|\.)linkmonitoramento\.com\.br$ 
(^|\.)microvisual\.com\.br$
(^|\.)oceanair\.com\.br$ 
(^|\.)postal\.com\.br$
(^|\.)postalnet\.com\.br$

Now we need to create the file that will control the websites allowed for those who can access YouTube

vim /etc/squid3/rules/youtube-proxy-websites
#/etc/squid3/rules/youtube-proxy-websites
(^|\.)googlevideo\.com$
(^|\.)youtube\.com$
(^|\.)ytimg\.com$

Now we need to create the file that will control the websites allowed for those who can access social networks

vim /etc/squid3/rules/social-networks-proxy-websites
#/etc/squid3/rules/social-networks-proxy-websites
(^|\.)facebook\.com$
(^|\.)twitter\.com$
(^|\.)youtube\.com$
(^|\.)instagram\.com$

Now we need to create the file that will control the file extensions that are denied for download

vim /etc/squid3/rules/downloads
#/etc/squid3/rules/downloads
\.ace$
\.af$
\.afx$
\.asf$
\.asx$
\.avi$
\.bat$
\.cmd$
\.com$
\.cpt$
\.divx$
\.dms$
\.dot$
\.dvi$
\.exe$
\.ez$
\.gl$
\.hqx$
\.kar$
\.lha$
\.lzh$
\.mov$
\.movie$
\.mp2$
\.mp3$
\.mpe$
\.mpeg$
\.mpg$
\.mpga$
\.pif$
\.qt$
\.rm$
\.rpm$
\.scr$
\.spm$
\.vbf$
\.vob$
\.vqf$
\.wav$
\.wk$
\.wma$
\.wmv$
\.wpm$
\.wrd$
\.wvx$
\.wz$

Now we need to create the file that will control the MIME types that are denied for download

vim /etc/squid3/rules/streaming
#/etc/squid3/rules/streaming
^application/asx$
^application/vnd.ms-asf$
^application/vnd.ms-powerpoint$
^application/x-mplayer2$
^application/x-msn-messenger$
^application/x-pn-mpg$
^application/ymsgr$
^audio/asf$
^audio/basic$
^audio/mp3$
^audio/mp4$
^audio/mpeg$
^audio/mpeg3$
^audio/mpg$
^audio/vnd.rn-realaudio$
^audio/x-aiff$
^audio/x-mp3$
^audio/x-mpeg$
^audio/x-mpeg3$
^audio/x-mpegaudio$
^audio/x-mpegurl$
^audio/x-mpg$
^audio/x-ms-wma$
^audio/x-pm-realaudio-plugin$
^audio/x-pn-realaudio$
^audio/x-pn-realvideo$
^audio/x-realaudio$
^audio/x-wav$
^image/mpg$
^video/mp4v-es$
^video/mpeg$
^video/mpeg2$
^video/mpg$
^video/quicktime$
^video/x-mpeg$
^video/x-mpeg2a$
^video/x-mpg$
^video/x-ms-asf$
^video/x-ms-asf-plugin$
^video/x-ms-wm$
^video/x-ms-wmv$
^video/x-ms-wmx$
^video/x-msvideo$
^video/x-pn-realvideo$

Now we need to create the file that will control the users whose access is denied by default, so they can only reach the sites that do not require authentication

vim /etc/squid3/rules/blocked-users
#/etc/squid3/rules/blocked-users
ruth.skolung
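With all the rule files in place, it is a good idea to let Squid validate the configuration before going any further; the parse action only checks the syntax and the referenced files, it does not touch the running service.

squid3 -k parse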

Now we need to make sure that winbind is able to get the user list from AD, so let's run a test as follows.

wbinfo -u
administrator
guest
krbtgt
douglas.santos
susan.cris
karolayne.santos
hillary.santos
[...]

Now we need to do the same as above to make sure that winbind is able to get the group list from AD.

Note: If you are using the same group names as I am, you need to create each one in AD before going ahead.

wbinfo -g
winrmremotewmiusers__
domain computers
domain controllers
schema admins
enterprise admins
cert publishers
domain admins
domain users
domain guests
group policy creator owners
ras and ias servers
allowed rodc password replication group
denied rodc password replication group
read-only domain controllers
enterprise read-only domain controllers
cloneable domain controllers
dnsadmins
dnsupdateproxy
[...]

NOTE: If you are working with Windows Server 2016, --group-info or --gid-info will not return any users from the group. More information about the deprecated functionality in Windows Server 2016: https://blogs.technet.microsoft.com/activedirectoryua/2016/02/09/identity-management-for-unix-idmu-is-deprecated-in-windows-server/

In order to find out which users belong to a specific group we can use the following command line.

wbinfo --group-info=direction-proxy
direction-proxy:x:10000:douglas.santos

As we can see, the user douglas.santos belongs to direction-proxy; in other words, this user has no restrictions on internet access.
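This is the same lookup that Squid performs through the external ACL helper configured in squid.conf. If you want, you can exercise the helper by hand; a minimal sketch, assuming the helper follows the standard external ACL protocol (one "user group" pair per line, answering OK when the user belongs to the group and ERR otherwise):

echo "douglas.santos direction-proxy" | /usr/lib/squid3/ext_wbinfo_group_acl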

Now we can check which groups a specific user belongs to with the following command line.

wbinfo --user-groups=douglas.santos
10001
10000

So we got two numbers; each one is the GID of a group the user belongs to. But what does the number 10000 mean? We need to check it with another command line.

Let's check the GID 10000

wbinfo --gid-info=10000
direction-proxy:x:10000:douglas.santos

Now let's take a look at what the number 10001 means.

wbinfo --gid-info=10001
domain users:x:10001:

As we can see above, 10001 is the domain users group; every user in the domain belongs to this group.
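Since winbind is plugged into /etc/nsswitch.conf, the standard getent tool should resolve the same entries, which is a handy cross-check:

getent group direction-proxy
getent passwd douglas.santos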

In some cases we may not remember all the command lines above, so I created a couple of scripts to help us.

The script below is very simple; it only helps to map the groups that a user belongs to. Feel free to improve it.

vim /etc/squid3/check_user.sh
#!/bin/bash
# List the groups a given AD user belongs to, resolving each GID with wbinfo.

if [ -n "$1" ]; then
  USERNAME="$1"

  # Resolve every GID the user belongs to into its group entry
  for GID in $(wbinfo --user-groups="$USERNAME"); do
    wbinfo --gid-info="$GID"
  done
else
  echo "Please use the script as: ./check_user.sh username"
fi
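A usage example (the script needs to be executable first); based on the wbinfo output above, the result should look like this:

chmod +x /etc/squid3/check_user.sh
/etc/squid3/check_user.sh douglas.santos
domain users:x:10001:
direction-proxy:x:10000:douglas.santos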

The script below is also very simple; it only helps to map the users that belong to a group. Feel free to improve it.

vim /etc/squid3/check_groups.sh
#!/bin/bash
# List the members of a given AD group using wbinfo.

if [ -n "$1" ]; then
  GROUP="$1"
  wbinfo --group-info="$GROUP"
else
  echo "Please use the script as: ./check_groups.sh group"
fi
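And a usage example for the group script, matching the wbinfo output we saw earlier:

chmod +x /etc/squid3/check_groups.sh
/etc/squid3/check_groups.sh direction-proxy
direction-proxy:x:10000:douglas.santos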

As Debian always starts services right after installation, we need to stop the Squid service before creating the cache directories and loading the new configuration.

systemctl stop squid3

Now we need to create the directories to store the proxy cache.

squid3 -z

Now we can start the proxy server with the following command line.

systemctl start squid3

NOTE: DO NOT FORGET TO CREATE THE GROUPS IN AD AND ADD THE USERS TO THEM BEFORE YOU START TESTING THE INTERNET ACCESS

We can test the user and password that we got from AD with the following command line.

wbinfo -a username%correctpassword
plaintext password authentication succeeded #
challenge/response password authentication succeeded #

The command above executed successfully; now let's take a look at another example.

Below is the output when a wrong password is given to the command line.

wbinfo -a username%wrongpassword
plaintext password authentication failed #
Could not authenticate user username%wrongpassword with plaintext password #
challenge/response password authentication failed #
error code was NT_STATUS_NO_LOGON_SERVERS (0xc000005e) #
error messsage was: No logon servers #
Could not authenticate user username with challenge/response #

In the Squid configuration we use another helper to validate the user and password, so we need to run a test to make sure it is working properly.

ntlm_auth --helper-protocol=squid-2.5-basic --domain=domain --username=username --password=correctpassword
NT_STATUS_OK: Success (0x0) #

The command line above uses ntlm_auth without the Kerberos ticket, so we need to give it the domain, username and password in order to validate the credentials used to access the proxy.

Let's run a test with the same command line, but this time with a wrong password, so we will get an error message as follows.

ntlm_auth --helper-protocol=squid-2.5-basic --domain=domain --username=username --password=wrongpassword
NT_STATUS_IO_TIMEOUT: NT_STATUS_IO_TIMEOUT (0xc00000b5) #
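With the helpers answering correctly we can run a quick end-to-end test through the proxy. A minimal sketch with curl; the password is a placeholder and the URL must be one that the user's groups are allowed to reach:

# -I requests only the headers; --proxy-ntlm authenticates against the proxy using NTLM
curl -I -x http://192.168.1.20:3128 --proxy-ntlm --proxy-user 'douglas.santos:password' http://www.facebook.com/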

Installing and Configuring the Lightsquid with Apache

We can see a preview of LightSquid at http://lightsquid.sourceforge.net/demo18/index.cgi?year=2009&month=08

Now we need to update the repositories and upgrade all the packages to the latest version.

aptitude update && aptitude dist-upgrade -y

Now we need to install the dependencies needed to run LightSquid properly with Apache.

aptitude install libgd-gd2-perl libbio-graphics-perl libapache2-mod-perl2 apache2 -y

Now we need to enable the Perl and CGI modules in Apache.

a2enmod perl cgid

Now let's get the LightSquid sources and store them in /var/www/html

cd /var/www/html/
wget -c http://wiki.douglasqsantos.com.br/Downloads/monitoring/lightsquid-1.8.tgz

Now we need to decompress the tarball

tar -xzvf lightsquid-1.8.tgz

Let's remove the tarball because we do not need it anymore

rm -rf lightsquid-1.8.tgz

I will rename lightsquid-1.8 to just lightsquid

mv lightsquid-1.8 lightsquid

Now we need to change the file permissions so that Apache can execute them

cd lightsquid
chmod +x *.cgi
chmod +x *.pl

Now we need to set the owner and group of the files and directories to www-data.

chown -R www-data:www-data /var/www/html/lightsquid

Now we can make some changes in lightsquid.cfg, such as the language and the Squid log path.

vim /var/www/html/lightsquid/lightsquid.cfg
[...]
$logpath             ="/var/log/squid3";
[...]
$lang                ="eng";

After changing the configuration we need to run a test to make sure that everything is ok

perl  /var/www/html/lightsquid/check-setup.pl
LightSquid Config Checker, (c) 2005-9 Sergey Erokhin GNU GPL

LogPath   : /var/log/squid3
reportpath: /var/www/html/lightsquid/report
Lang      : /var/www/html/lightsquid/lang/eng
Template  : /var/www/html/lightsquid/tpl/base
Ip2Name   : /var/www/html/lightsquid/ip2name/ip2name.simple


all check passed, now try access to cgi part in browser

One important feature that I like in LightSquid is that we can set up a group configuration that will appear in the web report, so instead of getting information only per user we can also get information per group.

Let's access the lightsquid directory

cd /var/www/html/lightsquid

Now let's create a copy of group.cfg.src to start working with it

cp group.cfg.src group.cfg

Here we need to use the following format: user/IP address, id, group. The id needs to be unique for each group, and the group does not need to exist in AD, so you can create your own or use the groups from AD.

vim /var/www/html/lightsquid/group.cfg
douglas           01      direction
anderson.angelote 01      direction
hillary           02      warehouse
nerso             02      warehouse

If you have a lot of groups and users it may be very hard to update group.cfg by hand, so you can create a file with your groups, such as:

vim groups.txt
almoxarifado-proxy
atendimento-proxy
baixa-proxy
bureau_d-proxy
bureau_o-proxy
cipa-proxy
compras-proxy
comprasonline-proxy
crm-proxy
design-proxy
dev-proxy
diretoria-proxy
dp-proxy
facebook-proxy
financeiro-proxy
gerencia-proxy
maps-google-proxy
marketing-proxy
rh-proxy
skype-proxy
transporte-proxy
youtube-proxy

Now we can use the following script to update group.cfg

vim ~/update-groups.cfg.sh
#!/bin/bash
# Rebuild the LightSquid group.cfg from the AD groups listed in groups.txt.

GROUP_CFG=/var/www/html/lightsquid/group.cfg

# Back up the current file and start again from an empty one
cp -Rfa "$GROUP_CFG"{,.bkp}
cat /dev/null > "$GROUP_CFG"

LGID=1
for GRP in $(cat groups.txt); do
  # The fourth field of wbinfo --group-info is the comma-separated member list
  for MEMBER in $(wbinfo --group-info="$GRP" | cut -d ':' -f 4 | tr ',' '\n'); do
    echo "$MEMBER  $LGID  $GRP" >> "$GROUP_CFG"
  done
  LGID=$((LGID + 1))
done

Now we need to make the script executable

chmod +x update-groups.cfg.sh

Now we can run the script like any other shell script

./update-groups.cfg.sh

Now we need to remove a placeholder file before running lightparser.pl

rm -rf /var/www/html/lightsquid/report/delete.me

Now we need to run the lightparser.pl again to reload the configuration

/var/www/html/lightsquid/lightparser.pl

Another feature that I like is that we can map a user name or IP address to another identifier, such as the full name of a user or machine.

vim /var/www/html/lightsquid/realname.cfg
douglas Douglas Quintiliano dos Santos
192.168.1.3    Nerso da Silva

Now we need to run the lightparser.pl again to reload the configuration

/var/www/html/lightsquid/lightparser.pl

Now we need to create a schedule to make sure that LightSquid is updated every 20 minutes; feel free to set up your own schedule.

crontab -e
[...]
*/20 * * * * /var/www/html/lightsquid/lightparser.pl today

Now we need to create the virtual host that will hold the configuration for the LightSquid web interface.

vim /etc/apache2/sites-available/lightsquid.conf
<VirtualHost *:80>
  ServerName lightsquid.douglasqsantos.com.br
  ServerAlias proxy.douglasqsantos.com.br
  ServerAdmin infra@douglasqsantos.com.br
  DocumentRoot "/var/www/html/lightsquid"

   <Directory "/var/www/html/lightsquid">
     DirectoryIndex index.cgi
     Options +ExecCGI -Indexes -MultiViews +SymLinksIfOwnerMatch
     AddHandler cgi-script .cgi
     AllowOverride All
     AuthUserFile /etc/apache2/access/lightsquid-htpasswd
     AuthName "LightSquid"
     AuthType Basic
     require valid-user
   </Directory>
   ServerSignature Off
   LogLevel info

  ErrorLog /var/log/apache2/lightsquid-error.log
  CustomLog /var/log/apache2/lightsquid-access.log combined
</VirtualHost>

As LightSquid does not provide authentication on its own, we need to add one to make sure that only the right people can access it.

mkdir /etc/apache2/access/

Now we need to create the file with the users and passwords that will be required to use the web interface.

htpasswd -s -c /etc/apache2/access/lightsquid-htpasswd lightsquid
New password: 
Re-type new password: 
Adding password for user lightsquid

Now we need to disable the default virtualhost

a2dissite 000-default

Let's enable the configuration of lightsquid

a2ensite lightsquid.conf
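Before restarting Apache it does not hurt to validate the configuration syntax.

apache2ctl configtest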

Now we need to restart the Apache service

/etc/init.d/apache2 restart

Now we can use the web interface to get information about user access at http://lightsquid.douglasqsantos.com.br or http://server_ip

Installing and Configuring the Lightsquid with Nginx

We can see a preview of LightSquid at http://lightsquid.sourceforge.net/demo18/index.cgi?year=2009&month=08

Now we need to update the repositories and upgrade all the packages to the latest version.

aptitude update && aptitude dist-upgrade -y

Now we need to install the dependencies needed to run LightSquid properly with Nginx.

aptitude install libgd-perl libbio-graphics-perl fcgiwrap nginx apache2-utils -y

Now let's get the LightSquid sources and store them in /var/www/html

cd /var/www/html/
wget -c http://wiki.douglasqsantos.com.br/Downloads/monitoring/lightsquid-1.8.tgz

Now we need to decompress the tarball

tar -xzvf lightsquid-1.8.tgz

Let's remove the tarball because we do not need it anymore

rm -rf lightsquid-1.8.tgz

I will rename lightsquid-1.8 to just lightsquid

mv lightsquid-1.8 lightsquid

Now we need to change the file permissions so that the web server can execute them

cd lightsquid
chmod +x *.cgi
chmod +x *.pl

Now we need to set the owner and group of the files and directories to www-data.

chown -R www-data:www-data /var/www/html/lightsquid

Now we can make some changes in lightsquid.cfg, such as the language and the Squid log path.

vim /var/www/html/lightsquid/lightsquid.cfg
[...]
$logpath             ="/var/log/squid3";
[...]
$lang                ="eng";

After changing the configuration we need to run a test to make sure that everything is ok

perl  /var/www/html/lightsquid/check-setup.pl
LightSquid Config Checker, (c) 2005-9 Sergey Erokhin GNU GPL

LogPath   : /var/log/squid3
reportpath: /var/www/html/lightsquid/report
Lang      : /var/www/html/lightsquid/lang/eng
Template  : /var/www/html/lightsquid/tpl/base
Ip2Name   : /var/www/html/lightsquid/ip2name/ip2name.simple


all check passed, now try access to cgi part in browser

One important feature that I like in LightSquid is that we can set up a group configuration that will appear in the web report, so instead of getting information only per user we can also get information per group.

Let's access the lightsquid directory

cd /var/www/html/lightsquid

Now let's create a copy of group.cfg.src to start working with it

cp group.cfg.src group.cfg

Here we need to use the following format: user/IP address, id, group. The id needs to be unique for each group, and the group does not need to exist in AD, so you can create your own or use the groups from AD.

vim /var/www/html/lightsquid/group.cfg
douglas           01      direction
anderson.angelote 01      direction
hillary           02      warehouse
nerso             02      warehouse

If you have a lot of groups and users it may be very hard to update group.cfg by hand, so you can create a file with your groups, such as:

vim groups.txt
almoxarifado-proxy
atendimento-proxy
baixa-proxy
bureau_d-proxy
bureau_o-proxy
cipa-proxy
compras-proxy
comprasonline-proxy
crm-proxy
design-proxy
dev-proxy
diretoria-proxy
dp-proxy
facebook-proxy
financeiro-proxy
gerencia-proxy
maps-google-proxy
marketing-proxy
rh-proxy
skype-proxy
transporte-proxy
youtube-proxy

Now we can use the following script to update group.cfg

vim ~/update-groups.cfg.sh
#!/bin/bash
# Rebuild the LightSquid group.cfg from the AD groups listed in groups.txt.

GROUP_CFG=/var/www/html/lightsquid/group.cfg

# Back up the current file and start again from an empty one
cp -Rfa "$GROUP_CFG"{,.bkp}
cat /dev/null > "$GROUP_CFG"

LGID=1
for GRP in $(cat groups.txt); do
  # The fourth field of wbinfo --group-info is the comma-separated member list
  for MEMBER in $(wbinfo --group-info="$GRP" | cut -d ':' -f 4 | tr ',' '\n'); do
    echo "$MEMBER  $LGID  $GRP" >> "$GROUP_CFG"
  done
  LGID=$((LGID + 1))
done

Now we need to make the script executable

chmod +x update-groups.cfg.sh

Now we can run the script like any other shell script

./update-groups.cfg.sh

Now we need to remove a placeholder file before running lightparser.pl

rm -rf /var/www/html/lightsquid/report/delete.me

Now we need to run the lightparser.pl again to reload the configuration

/var/www/html/lightsquid/lightparser.pl

Another feature that I like is that we can map a user name or IP address to another identifier, such as the full name of a user or machine.

vim /var/www/html/lightsquid/realname.cfg
douglas Douglas Quintiliano dos Santos
192.168.1.3    Nerso da Silva

Now we need to run the lightparser.pl again to reload the configuration

/var/www/html/lightsquid/lightparser.pl

Now we need to create a schedule to make sure that LightSquid is updated every 20 minutes; feel free to set up your own schedule.

crontab -e
[...]
*/20 * * * * /var/www/html/lightsquid/lightparser.pl today

Now we need to create the virtual host that will hold the configuration for the LightSquid web interface.

vim /etc/nginx/sites-available/lightsquid.douglasqsantos.com.br
#/etc/nginx/sites-available/lightsquid.douglasqsantos.com.br
## Sets configuration for a virtual server.
server {
      ## Sets the address and port for IP, or the path for a UNIX-domain socket on which the server will accept requests.
      listen 80;
       
      ## Sets names of a virtual server
      server_name lightsquid.douglasqsantos.com.br;
       
      ## Enables or disables emitting nginx version in error messages and in the “Server” response header field. 
      server_tokens off;
 
      ## Logging 
      access_log /var/log/nginx/lightsquid.douglasqsantos.com.br-access.log combined;
      error_log /var/log/nginx/lightsquid.douglasqsantos.com.br-error.log;
       
      ## Sets the root directory for requests.
      root /var/www/html/lightsquid;
                 
      ## Sets configuration depending on a request URI.
      location / {
        ## Checks the existence of files in the specified order and uses the first found file for request processing
        try_files $uri $uri/ =404;
        auth_basic "Acesso Restrito";
        auth_basic_user_file /etc/nginx/.htpasswd-lightsquid;
      }
 
      ## Defines files that will be used as an index.
      index index.cgi index.htm index.html;
      
     ## Sets configuration for cgi files.
     location ~ \.(cgi|pl)$ {
        include fastcgi.conf;
        fastcgi_pass   unix:/var/run/fcgiwrap.socket;
    }
}

As LightSquid does not provide authentication on its own, we need to add one to make sure that only the right people can access it.

Now we need to create the file with the users and passwords that will be required to use the web interface.

htpasswd -c /etc/nginx/.htpasswd-lightsquid lightsquid
New password: 
Re-type new password: 
Adding password for user lightsquid

Now we need to disable the default virtualhost

unlink /etc/nginx/sites-enabled/default

Let's enable the configuration of lightsquid

ln -s /etc/nginx/sites-available/lightsquid.douglasqsantos.com.br /etc/nginx/sites-enabled/lightsquid.douglasqsantos.com.br
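Before restarting Nginx we can validate the configuration syntax.

nginx -t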

Now we need to restart the Nginx service

systemctl restart nginx

Now we can use the web interface to get information about user access at http://lightsquid.douglasqsantos.com.br or http://server_ip

References