Saturday, August 4, 2018

OpenVPN import in Ubuntu

Oh the joy of Linux and importing VPN configurations from pfSense OpenVPN client export wizard/generator.

On Ubuntu 16.04 LTS, the trick was not to use the .ovpn file, but to import manually, skipping the OpenVPN part of the network manager GUI and using the 'archive' option from the pfSense file generator instead.

I updated to Ubuntu 18.04 LTS, and now the trick was to download the Mac/Windows Viscosity bundle, unpack it, and use that .conf.

Oh Linux network manager and your VPN 'turn key' GUI, you trickster you.

Wednesday, July 4, 2018

Increasing the sensitivity of the scanned panel in A3Sec's pfSense Splunk app

So, playing around with A3Sec's pfSense Splunk app, I moved the threshold of the scanned panel around to see how many more IPs/geolocations popped up on the map.  I then had a friend run a standard/default nmap scan against my IP and cranked the search string's sensitivity down until it caught the scan.  This is the new string:

```
index=gw_pfsense sourcetype=pfsense_filterlog action=blocked | dedup src dest_port | transaction src maxspan=5m maxpause=12s keeporphans=false | where eventcount > 3 | iplocation src | geostats latfield=lat longfield=lon count(src)
```
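Breaking that search down for future me (my reading of each stage; the maxspan/maxpause/eventcount values are the knobs to turn for sensitivity):

```
index=gw_pfsense sourcetype=pfsense_filterlog action=blocked   <- only blocked firewall events
| dedup src dest_port                                          <- one event per source / destination-port pair
| transaction src maxspan=5m maxpause=12s keeporphans=false    <- bundle a source's hits: 5 min total window,
                                                                  max 12 sec gap between hits
| where eventcount > 3                                         <- a "scan" = more than 3 ports in one bundle
| iplocation src                                               <- add geolocation fields (lat/lon)
| geostats latfield=lat longfield=lon count(src)               <- aggregate counts for the map panel
```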

Saturday, June 2, 2018

Shinobi NVR

Being a glutton for punishment, I didn't want the standalone NVR unit built for my PoE security cameras.  I was headstrong that I would buy BlueIris or get Zoneminder working.

I started to get sticker shock at BlueIris: it's $60.00 for the server software, and that doesn't even come with the $10.00 Android app!  The second gripe was that it only runs on Windows, and I've found Win10 to be really bad for uptime.  I have had to firewall my Win10 VM (currently used for the vendor's feature-limited PoE client) off from the internet so it doesn't randomly update and restart (with my luck, right during a break-in).

A quick VM spin-up of a Zoneminder build showed me I wanted NOTHING to do with that software.  Getting discouraged, I started to entertain the idea of buying the PoE brand's NVR.  Then I came across Shinobi: the new kid on the block, but solid looking.  So, on to the pro tips earned by hours of fail (not Shinobi's fails, my fails).

Install from site, great instructions, well packaged, one of the easier Linux installs
https://shinobi.video/docs/start

Reddit support forum:
https://www.reddit.com/r/ShinobiCCTV/

- Do not build on CentOS, trust me.
- Currently on Ubuntu Server 16.xx.
- If it's a VM, make a snapshot after updating the OS and installing Shinobi.  Not just as best practice: there are a number of "land mine" settings in Shinobi that, if set wrong for your PoE camera, will make Shinobi totally unresponsive.  It's easiest to just start over from the snapshot.

Adding cameras:  Reolink RLC-410 bullet, RLC-420 dome, RLC-422 5MP zoom dome.
Some info on Reolink from Zoneminder trailblazers:  https://forums.zoneminder.com/viewtopic.php?t=25874

Reolink:  h265 camera facts.
https://reolink.com/h265-ip-cameras-buying-guide/

- rtsp://user:password@IPaddress:554/h264Preview_01_main
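To sanity-check a camera feed outside of Shinobi, something like this ffprobe one-liner should confirm the RTSP stream answers at all (assumes ffmpeg is installed; swap in your own credentials and camera IP):

```
ffprobe -rtsp_transport tcp "rtsp://user:password@IPaddress:554/h264Preview_01_main"
```

If ffprobe prints stream details (codec, resolution, frame rate), the camera side is fine and any lag/lockup is on the NVR side.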

Stream types for Reolink:
- Poseidon lags
- HLS  *AVOID*: makes the system unusable and requires reverting to the snapshot
- FLV doesn't work
- JPEG with audio is decent

Notes from the Zoneminder attempt:
- Reolink exposes two feeds per camera: one high frame rate, one much slower (and, I believe, lower resolution).  In the OEM client, I believe the slower feed is used for motion sensing.

Current result: all four of my Reolinks are feeding in, but there is severe frame skipping, or the feed just locks up altogether.

Thursday, April 26, 2018

Auditing FreeNAS CIFS/SMB activity

Having set up Splunk to ingest pfSense router/firewall/Snort IDS/IPS logs and my Windows laptop logs, I have a lot of goals, so little skill:
- Build upon the already great Splunk apps for the above logs
- One of those apps is for FreeNAS, and I also want to catch SMB share activity
- Ingest Foxhound Raspberry Pi Bro logs

For FreeNAS, before even tackling the Splunk setup, I first searched for how to set up auditing on FreeNAS to create the SMB activity logs in the first place, and found this thread:

https://forums.freenas.org/index.php?threads/tutorial-add-full-logging-on-samba-shares-full_audit-freenas-9-3.13840/

Basically, adding this in the SMB section of the GUI:

full_audit:prefix = %u|%I|%m|%S
full_audit:failure = connect
full_audit:success = mkdir rename unlink rmdir pwrite
full_audit:facility = LOCAL5
full_audit:priority = NOTICE

That's a bit noisy; a quieter config (dropping pwrite) is:


full_audit:prefix = %u|%I|%m|%S
full_audit:failure = connect
full_audit:success = mkdir rename unlink rmdir
full_audit:facility = LOCAL5
full_audit:priority = NOTICE
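With that `full_audit:prefix = %u|%I|%m|%S`, each audit line should start with user|client-IP|machine|share before the vfs operation itself, so even a crude pipe-delimited parse gets useful fields out before Splunk ever sees them. A quick sketch (the sample line is made up, and the exact trailing fields depend on the operation):

```shell
# Hypothetical sample audit line, assuming the %u|%I|%m|%S prefix above:
# user | client IP | client machine | share, then the vfs op and its arguments.
line='alice|192.168.1.50|alice-laptop|media|pwrite|ok|/mnt/tank/media/video.mp4'

# Split on '|' to pull out the fields a SIEM would care about:
echo "$line" | awk -F'|' '{ printf "user=%s src_ip=%s share=%s op=%s\n", $1, $2, $4, $5 }'
```

This prints `user=alice src_ip=192.168.1.50 share=media op=pwrite`, which maps cleanly onto Splunk field extractions later.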

More stuff about CIFS/SMB logging:
http://a32.me/2009/10/samba-audit-trail/

I can't help but feel it's outdated for v11 though.  I did the first part of the tutorial (waiting to see results before worrying about log retention) and the results were not there.  Then, when I simply used the GUI to set the SMB service's logging to "normal", I started to see user activity in /var/log/samba4/log.smbd.  I'm not sure if both the tutorial steps and that last bit have to be done, or if with v11 all one needs to do is set logging to "normal".

* Note: FreeBSD normally requires clog -f instead of tail -f for its circular logs, but in FreeNAS tail -f is what's used...

Nevertheless, the security-auditor nerd in me rejoiced at seeing detailed activity of videos and PDF docs accessed by a Linux laptop user.

Next is to get the Splunk FreeNAS TA working, and to add dashboard panels that highlight SMB file activity.

--- 4.17.2020 the saga continues ---

Getting data into Splunk:

There is a two-pronged approach here.  Splunk has an app for FreeNAS, but I believe it uses a REST API to pull data, and that data is about the FreeNAS box itself: drive use, temps, etc.

Having those logs is nice, but I want the SMB logs as well.  The concern in the earlier post about filling up internal drive space with log data might become a non-issue if the data gets spirited away and FreeNAS overwrites effectively without having to be told to do so (we will see if some crash-on-audit-fail happens, or if the data just dries up).

You can set up a syslog output from FreeNAS: System > General > at the bottom is a Syslog Server line.  I put my Splunk host's IP and went with port 9083.  Correspondingly, add this as a data input in Splunk and ensure the firewall allows 9083 traffic.
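Before blaming FreeNAS for missing events, it's worth confirming the UDP path end-to-end with a throwaway packet from any box on the LAN (netcat flags vary between netcat variants; this assumes the OpenBSD-style nc, and the Splunk host IP is a placeholder):

```
echo "freenas syslog path test" | nc -u -w1 192.168.1.10 9083
```

If that test string shows up in the Splunk data input, the firewall and input config are fine and any gap is on the FreeNAS side.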

Success:


So next is to tell FreeNAS to grab those SMB logs and spirit them away via syslog as well, on top of the generic default syslog data (which maybe we want to blacklist later?).

This person's thread is a good start:

https://www.ixsystems.com/community/threads/samba-audit-logs-to-centralise-log-servers.79133/

Read how they fixed it a few posts later, and try to implement this:

```
destination m_samba_audit { file("/mnt/ie/logs/smb/smb.log"); };
log { source(src); filter(f_local5); destination(m_samba_audit); flags(final); };
destination loghost { udp("192.168.3.42" port(5514) localport(514)); };
log { source(src); filter(f_info); destination(loghost); };
```
But the mistake was putting "flags(final);" in there.  Remove that part, and tailor the rest of the lines to match my environment:

destination m_samba_audit { file("/var/log/samba4/log.smbd"); };
log { source(src); filter(f_local5); destination(m_samba_audit); };


syslog.conf is found in /etc/

Looking at syslog.conf (copy below):

# $FreeBSD$
#
#       Spaces ARE valid field separators in this file. However,
#       other *nix-like systems still insist on using tabs as field
#       separators. If you are sharing this file between systems, you
#       may want to use only tabs as field separators here.
#       Consult the syslog.conf(5) manpage.
*.err;kern.warning;auth.notice;mail.crit                /dev/console
*.notice;authpriv.none;kern.debug;lpr.info;mail.crit;news.err   /var/log/messages
security.*                                      /var/log/security
auth.info;authpriv.info                         /var/log/auth.log
mail.info                                       /var/log/maillog
lpr.info                                        /var/log/lpd-errs
ftp.info                                        /var/log/xferlog
cron.*                                          /var/log/cron
!-devd
*.=debug                                        /var/log/debug.log
*.emerg                                         *
# uncomment this to log all writes to /dev/console to /var/log/console.log
# touch /var/log/console.log and chmod it to mode 600 before it will work
#console.info                                   /var/log/console.log
# uncomment this to enable logging of all log messages to /var/log/all.log
# touch /var/log/all.log and chmod it to mode 600 before it will work
#*.*                                             /var/log/all.log
# uncomment this to enable logging to a remote loghost named loghost
#*.*                                            @loghost
# uncomment these if you're running inn
# news.crit                                     /var/log/news/news.crit
# news.err                                      /var/log/news/news.err
# news.notice                                   /var/log/news/news.notice
# Uncomment this if you wish to see messages produced by devd
# !devd
# *.>=notice                                    /var/log/devd.log
!ppp
*.*                                             /var/log/ppp.log
!*
include                                         /etc/syslog.d
include                                         /usr/local/etc/syslog.d


Noticed some commented-out lines that, when uncommented, look like they'd allow all logs to go over syslog.

---- Later ----

I noted that changes to /etc/syslog.conf (or the cp of the file to syslog.conf.backup) didn't stick.  Turns out FreeNAS keeps the "real" files somewhere else; the changes I made live in RAM, I guess, and will not be persistent.

The path to do so (and really break stuff) is /conf/base/etc
in there will be syslog-ng.conf

Also look up beadm.  It's a way to create a new boot environment, or in this case to back up the current boot environment before editing stuff.
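A sketch of what that could look like before touching anything under /conf/base/etc (the BE name is made up; check `beadm list` output on your own box first):

```
beadm list                      # show existing boot environments
beadm create pre-syslog-edit    # snapshot the current BE before editing configs
```

If the edits break boot, the old environment can be activated again from the boot loader menu.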

I input the following:


Now, time to edit /conf/base/etc/local/syslog-ng.conf to send /var/log/samba4/log.smbd out over syslog.  (Back up first: cp syslog-ng.conf syslog-ng.conf.backup)

Taken from a forum post; paths and names need changing to match my environment:

destination m_samba_audit { file("/mnt/ie/logs/smb/smb.log"); };
log { source(src); filter(f_local5); destination(m_samba_audit); };

Mine should be more like:

destination m_samba_audit { file("/var/log/samba4/log.smbd"); };
log { source(src); filter(f_local5); destination(m_samba_audit); };

The difference in path, I believe, is that the other poster has the samba logs going into an SMB share folder first, then taken from there to go to syslog.  This is probably best practice for storage space and data retention concerns.

Buuuut...


Files in this path are unwritable even to root.  Is there a chmod to fix this?  Or a mount command?

The fix seems a bit heavy handed but the advice given was to upgrade from 11.2-U7 to 11.3-U2.  After the upgrade, the paths above will be writeable.



The manual here (11.7?  I'm on 11.2; is FreeNAS even on 11.7?) is pretty interesting, and has a section on enabling syslog-ng:

https://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/configtuning-syslog.html

When you enable syslog in the WebUI, it appears it just sets ```syslog_ng_enable="YES"``` and, below that, ```nginx_enable="YES"```



This explains why the only logs getting into Splunk so far have nginx in the raw data.

In theory, I need to edit rc.conf, found in /conf/base/etc, to have syslog-ng enabled; the stanza is
```syslogd_enable="YES"```.  It is currently at the default ```syslogd_enable="NO"```

Then edit the syslog-ng.conf in /conf/base/etc/local to have the direction and log stanzas calling out the samba log.


After the mods, just like Linux: ```service syslogd restart```

Also, a link on how to send test syslog traffic to check connectivity:
https://www.freebsd.org/cgi/man.cgi?query=logger&sektion=1&manpath=freebsd-release-ports
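From that man page, a test message tagged local5 (to mimic the samba audit facility) would look something like this on FreeBSD (the IP and port are placeholders for my Splunk input):

```
logger -h 192.168.1.10 -P 9083 -p local5.notice "samba audit syslog test"
```

If this lands in Splunk but real audit lines don't, the problem is the syslog-ng config rather than connectivity.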

Default syslog-ng.conf for reference.  What my noob brain takes away: even though the destinations are not commented out, I am not seeing logs from those destinations when syslog is enabled, because further down their respective 'log' lines are commented out (and I assume a destination without a 'log' statement pointing at it simply gets skipped).

root@freenas:/conf/base/etc/local # more syslog-ng.conf
@version:3.19
@include "scl.conf"

#
# This sample configuration file is essentially equivalent to the stock
# FreeBSD /etc/syslog.conf file.
#
# $FreeBSD: head/sysutils/syslog-ng/files/syslog-ng.conf.sample 340872 2014-01-24 00:14:07Z mat $
#

#
# options
#
options { chain_hostnames(off); flush_lines(0); threaded(yes); };

#
# sources
#
source src { system();
             udp(); internal(); };

#
# destinations
#
destination messages { file("/var/log/messages"); };
destination security { file("/var/log/security"); };
destination authlog { file("/var/log/auth.log"); };
destination maillog { file("/var/log/maillog"); };
destination lpd-errs { file("/var/log/lpd-errs"); };
destination xferlog { file("/var/log/xferlog"); };
destination cron { file("/var/log/cron"); };
destination debuglog { file("/var/log/debug.log"); };
destination consolelog { file("/var/log/console.log"); };
destination all { file("/var/log/all.log"); };
destination newscrit { file("/var/log/news/news.crit"); };
destination newserr { file("/var/log/news/news.err"); };
destination newsnotice { file("/var/log/news/news.notice"); };
destination slip { file("/var/log/slip.log"); };
destination ppp { file("/var/log/ppp.log"); };
destination console { file("/dev/console"); };
destination allusers { usertty("*"); };
#destination loghost { udp("loghost" port(514)); };

#
# log facility filters
#
filter f_auth { facility(auth); };
filter f_authpriv { facility(authpriv); };
filter f_not_authpriv { not facility(authpriv); };
#filter f_console { facility(console); };
filter f_cron { facility(cron); };
filter f_daemon { facility(daemon); };
filter f_ftp { facility(ftp); };
filter f_kern { facility(kern); };
filter f_lpr { facility(lpr); };
filter f_mail { facility(mail); };
filter f_news { facility(news); };
filter f_security { facility(security); };
filter f_user { facility(user); };
filter f_uucp { facility(uucp); };
filter f_local0 { facility(local0); };
filter f_local1 { facility(local1); };
filter f_local2 { facility(local2); };
filter f_local3 { facility(local3); };
filter f_local4 { facility(local4); };
filter f_local5 { facility(local5); };
filter f_local6 { facility(local6); };
filter f_local7 { facility(local7); };

#
# log level filters
#
filter f_emerg { level(emerg); };
filter f_alert { level(alert..emerg); };
filter f_crit { level(crit..emerg); };
filter f_err { level(err..emerg); };
filter f_warning { level(warning..emerg); };
filter f_notice { level(notice..emerg); };
filter f_info { level(info..emerg); };
filter f_debug { level(debug..emerg); };
filter f_is_debug { level(debug); };

#
# program filters
#
filter f_ppp { program("ppp"); };
filter f_slip { program("startslip"); };

#
# *.err;kern.warning;auth.notice;mail.crit              /dev/console
#
log { source(src); filter(f_err); destination(console); };
log { source(src); filter(f_kern); filter(f_warning); destination(console); };
log { source(src); filter(f_auth); filter(f_notice); destination(console); };
log { source(src); filter(f_mail); filter(f_crit); destination(console); };

#
# *.notice;authpriv.none;kern.debug;lpr.info;mail.crit;news.err /var/log/messages
#
log { source(src); filter(f_notice); filter(f_not_authpriv); destination(messages); };
log { source(src); filter(f_kern); filter(f_debug); destination(messages); };
log { source(src); filter(f_lpr); filter(f_info); destination(messages); };
log { source(src); filter(f_mail); filter(f_crit); destination(messages); };
log { source(src); filter(f_news); filter(f_err); destination(messages); };

#
# security.*                                            /var/log/security
#
log { source(src); filter(f_security); destination(security); };

#
# auth.info;authpriv.info                               /var/log/auth.log
log { source(src); filter(f_auth); filter(f_info); destination(authlog); };
log { source(src); filter(f_authpriv); filter(f_info); destination(authlog); };

#
# mail.info                                             /var/log/maillog
#
log { source(src); filter(f_mail); filter(f_info); destination(maillog); };

#
# lpr.info                                              /var/log/lpd-errs
#
log { source(src); filter(f_lpr); filter(f_info); destination(lpd-errs); };

#
# ftp.info                                              /var/log/xferlog
#
log { source(src); filter(f_ftp); filter(f_info); destination(xferlog); };

#
# cron.*                                                /var/log/cron
#
log { source(src); filter(f_cron); destination(cron); };

#
# *.=debug                                              /var/log/debug.log
#
log { source(src); filter(f_is_debug); destination(debuglog); };

#
# *.emerg                                               *
#
log { source(src); filter(f_emerg); destination(allusers); };

#
# uncomment this to log all writes to /dev/console to /var/log/console.log
# console.info                                          /var/log/console.log
#
#log { source(src); filter(f_console); filter(f_info); destination(consolelog); };

#
# uncomment this to enable logging of all log messages to /var/log/all.log
# touch /var/log/all.log and chmod it to mode 600 before it will work
# *.*                                                   /var/log/all.log
#
#log { source(src); destination(all); };

#
# uncomment this to enable logging to a remote loghost named loghost
# *.*                                                   @loghost
#
#log { source(src); destination(loghost); };

#
# uncomment these if you're running inn
# news.crit                                             /var/log/news/news.crit
# news.err                                              /var/log/news/news.err
# news.notice                                           /var/log/news/news.notice
#
#log { source(src); filter(f_news); filter(f_crit); destination(newscrit); };
#log { source(src); filter(f_news); filter(f_err); destination(newserr); };
#log { source(src); filter(f_news); filter(f_notice); destination(newsnotice); };

#
# !startslip
# *.*                                                   /var/log/slip.log
#
log { source(src); filter(f_slip); destination(slip); };

#
# !ppp
# *.*                                                   /var/log/ppp.log
#
log { source(src); filter(f_ppp); destination(ppp); };


Wednesday, April 4, 2018

BRO on a Raspberry Pi + Splunk

So the onset of this project was spurred by a few goals-
- I have a WordPress site that might as well be a honeypot, because WordPress on a DMZ offering services to the interwebs is, well, pretty much asking for it.
- I had a Raspberry Pi 3B just sitting there; it had a few roles before, but I just couldn't find a new one for it after giving up on Home Assistant.
- I want to increase my auditing and analysis fu.

One could go about installing Bro from source, and a great guide (though meant for Ubuntu) is, of course, found on DigitalOcean:
https://www.digitalocean.com/community/tutorials/how-to-install-bro-on-ubuntu-16-04
And I might go down this path later to have more control and understanding of the setup.

But to save yourself a little headache, and to add additional features, sneakymonk3y on GitHub made the foxhound-nsm build.  It did not install correctly, but I reached out, and gebhard73 had a fork that did work (except for Critical Stack, which seems to have dropped ARM support).

Check out sneakymonk3y's blog post on the build (pay attention to the Critical Stack account part; hopefully Critical Stack supports ARM again, as it looks awesome):
https://www.sneakymonkey.net/2016/10/30/raspberrypi-nsm/

One day I hope to be able to make a build like that.

So step one:  foxhound build done.

Step two:  Mirror/span port to the Pi from the router's DMZ interface

My home router is a pfSense box (thank you pfSense and Netgate crew for everything!) on a generic appliance box from Amazon; I highly recommend pfSense for learning routing, firewall ACLs, Snort, and other network fundamentals.  After googling how to bridge the DMZ to an available interface as a mirror port, packets were streaming into the Foxhound Bro Pi.

Step three:  Look at the Bro logs.  Going to /var/log/bro/current shows that Bro was indeed getting information in!  But how good of a grep'er or regex'er are you?  I wanted pretty SIEM stuff, so on to...

Step four:  Install the Splunk ARM universal forwarder.  You will need an account with Splunk, and using wget with a username and password plus the URL of the "download" button just downloads the HTML of the page.  I had to download the tar to my workstation, then scp the file to the Pi (after making a tmp folder in the foxhound directory).
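For reference, the workstation-to-Pi hop is a one-liner (the exact tarball name, Pi address, and foxhound path are placeholders for whatever your build used):

```
scp splunkforwarder-7.0.3-*-Linux-arm.tgz pi@192.168.1.20:/home/pi/foxhound/tmp/
```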

https://www.raspberrypi.org/documentation/remote-access/ssh/scp.md

Support for the Splunk UF on the Raspberry Pi is great at the dev level (the forwarder is the latest 7.0.3 build, which was great to see), but the supporting documentation is next to nothing.  Following the Linux tar install instructions:

Install from a tar file

  1. Expand the tar file into an appropriate directory using the tar command. The default installation location is splunk in the current working directory.
    tar xvzf splunkforwarder-<…>-Linux-x86_64.tgz
    
  2. To install into /opt/splunkforwarder, run:
    tar xvzf splunkforwarder-<…>-Linux-x86_64.tgz -C /opt
Then I received "Couldn't determine $SPLUNK_HOME, perhaps it should be set in environment" when typing ./splunk start in the /opt/splunkforwarder/bin path, so this thread was a help:

 https://answers.splunk.com/answers/553373/couldnt-determine-splunk-home-perhaps-it-should-be.html
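The gist of the fix from that thread, as I understand it, is to export SPLUNK_HOME before starting (the path assumes the tar was unpacked into /opt as in step 2 above):

```
export SPLUNK_HOME=/opt/splunkforwarder
cd $SPLUNK_HOME/bin
./splunk start --accept-license
```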

Next: my Splunk server is on a free license, so I have to manually configure the forwarder's outputs.conf, and manually install any add-on that gives the forwarder the functionality a server-side app requires (such as data input from the Bro logs, plus sourcetyping and field extractions for the app's dashboards).
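A minimal outputs.conf sketch for that manual step (the indexer IP and the receiving port 9997 are assumptions; set them to whatever your indexer's receiving input actually listens on):

```
# $SPLUNK_HOME/etc/system/local/outputs.conf
[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
server = 192.168.1.10:9997
```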






Go-backs: I would like to make an Ubuntu VM Bro build simply to get Critical Stack working as well, but my ESXi machine would need another NIC that it does not have, and it's such an old box that the BIOS doesn't have enough memory to support a PCI NIC I have... maybe there is some neat networking trick to get the pfSense mirror port output through the LAN NIC of the hypervisor and into a Bro VM.

Friday, March 23, 2018

Adding Snort for Splunk app

Log into Splunk, go to add app, search "snort", and add Snort for Splunk.

Follow these instructions:

App Installation

1.) To install the app, download the app to a suitable download location.
2.) Open Splunk and click on the Manage Apps icon.
3.) Click on the Install app from file button.
4.) In the Upload app window, select the Browse button under File and locate the SnortforSplunk.spl file in the download location in step 1.
5.) Click the Upload button to install the app.
6.) Once the app is installed follow the next steps to setup the Data Input.
7.) Under Splunk -> Settings -> Data Inputs -> Local Inputs -> UDP -> Click the New button.
8.) In the Port field under Add Data -> Select Source, enter 514 for the port to be used.
9.) In the Only accept connection from field under Add Data -> Select Source, enter the IP address of the pfSense appliance
(in the format XXX.XXX.XXX.XXX) and click Next.
10.) From the Source Type dropdown under Add Data -> Input Settings, select Network and Security -> snort.
11.) From the App Context dropdown under Add Data -> Input Settings, select Snort for Splunk.
12.) Click the Review button.
13.) Once satisfied with the settings, click the Submit button.

pfSense Setup

1.) The setup assumes that pfSense version 2.3.2-RELEASE-p1 is being used as a firewall, along with pfSense-pkg-snort version 3.2.9.2_16 (which includes Barnyard2 version 1.13 and Snort version 2.9.8.3) and that this has been properly setup.
2.) Select Services -> Snort from the main menu and this will show the Snort Interfaces page.
3.) Select the Edit option (Pencil icon) under the Actions column on the page adjacent to the interface to be captured.
4.) Under the submenu, select the {Interface} Barnyard2 (substitute {interface} for either WAN or LAN or as has been setup on pfSense).
5.) Under General Barnyard2 Settings, make sure the following are checked:-
- Enable Barnyard2
- Show Year
- Archive Unified2 Logs
and leave the rest of these settings on their default values.
6.) Scroll down to Syslog Output Settings and select Enable Syslog
7.) Under Remote Host enter the IP address of the Splunk server that is receiving the log files from Barnyard2.
8.) Under Remote Port enter the port of the Splunk server that is receiving the log files from Barnyard2 (default is port 514). **
9.) Change Log Facility from default to LOG_AUTH.
10.) Change Log Priority from default to LOG_ALERT.
11.) With all the settings done click on the Save button at the bottom.
12.) Click on the Snort Interfaces menu item and under the Snort Status column, click on the icon to start/restart the Snort interface.
13.) Check on the Splunk server that the information logged by Barnyard2 is captured by the app.
** Exception: I already had pfSense syslogs going into UDP 514, so I made another data input in Splunk for port 992/UDP.
*** Added port 992/UDP to the public-interface firewall on the CentOS 7 server hosting the Splunk indexer.
**** Specified a newly created index, index = snort, used for the Snort app.
* Is pfSense blocking the Barnyard2 logs?
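To answer that last question, watching the wire on the Splunk host should settle whether the Barnyard2 packets arrive at all, and firewall-cmd confirms the port is actually open (this assumes the CentOS 7 host from above, with firewalld running):

```
tcpdump -n -i any udp port 992        # do Barnyard2 packets reach the host at all?
firewall-cmd --list-ports             # is 992/udp actually in the active zone?
firewall-cmd --permanent --add-port=992/udp && firewall-cmd --reload   # add it if not
```

If tcpdump shows nothing, the block is upstream (pfSense or Barnyard2 config); if packets arrive but Splunk shows nothing, it's the local firewall or the data input.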