SIEM- ELK, Cases Caught - Coggle Diagram
SIEM- ELK
Important
The difference between a good analyst & a great analyst: a great analyst tries to understand his or her organization & uses that knowledge to find & thwart the bad guys. By establishing this knowledge, an analyst can turn around & use it as a weapon. ex: fuzzy search in ELK
These core NW services tend to be very noisy - how can we consume, augment & enrich their logs & make them better for our detection? Use these logs to catch abnormalities, unauthorized activity, C2 & beaconing.
-
TACTICAL Detection with core services
DNS, HTTP/HTTPS & to some degree SMTP are the key critical data sources. In fact, if you were to ask me which single data source I would keep if I could only have one - the answer is DNS.
Very beneficial; again, it is going to be high volume, so we need to figure out how to handle it from a retention standpoint & how to use it.
Common service logs can be augmented & made defensible in terms of active defense.
Difference between good meal & great meal can be the addition of a single ingredient
People & processes are key!! You can't build detects for things you are not aware of - that is a gap in training.
Core NW services
Your core NW services are required in order for the network to function correctly.
Can you access the internet without DNS? Can you send email without SMTP?
Make the SIEM as productive & high-value as possible. When you pull DNS logs, that supports forensics, incident handling & mass detection - it tells a story. If you are chasing down alerts, you need this data to correlate & provide context to what's happening. So we need to collect & tune it accordingly so we can handle it without drowning. After filtering the logs, weaponize the SIEM for strong detection capabilities.
The attacks we will demonstrate & catch are real - based on my life experience, my research & many organizations which have been compromised. We will map how each attack could have been better detected vs what actually happened in real life.
Heart to Heart discussion here:
Threat feeds are highly advertised & claimed to be awesome. In my opinion, and after talking to a lot of folks, the value threat feeds provide is not as much as you think. They are not the first thing you should be looking at if you are buying a SIEM today - do key data source analysis first. They tend to be blacklists & don't get updated quickly.
They send you down the rabbit hole; instead, there are other quick-win techniques.
-
We are constantly told by pentesters that it is hard to be a defender because pentesters, adversaries & hackers only need to find one vulnerability & exploit it to own your environment - basically one unpatched system & they win. This is false: it doesn't mean that if they compromise one system it's game over. They don't have the keys to the kingdom yet; they still have to pivot laterally & move around toward their end goal. What really is true: as defenders, we only need one detect to catch them & evict them from the network to win.
SMTP
Whaling attack: a CFO was mimicked to try to create pressure for a transfer of funds. It seems to be your boss; the language seems like your boss. Also a cousin-domain attack: labmeinc vs 1abmeinc - the display name matched the one in the org while a spoofed or illegitimate domain was being used.
If you bring in SMTP logs, you can look for bad attachments carrying malware, malicious domains & internal domains being used from an external source. Better-suited technologies also exist: antispam & inline filtering.
-
Email Monitoring:
How can the SIEM detect what other mature devices do not?
Inbound monitoring possibilities
- Fuzzy Search
Monitor for bursts of email from outside source
Look for external use of key employee name
phishing attack, click through malware ,spam
Outbound:
C2 activity
Your systems are phished & turn around to phish other orgs
Fuzzy Phishing
Technical fuzzy match - looking for domains similar to our domain but not an exact match - Kibana's ~ (fuzzy) operator
Look for a mass influx of events
Someone using open source tools - theHarvester, Recon-ng, Maltego - to find email IDs of employees & mass phish the org
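The fuzzy-match idea above can be sketched outside Kibana too. A minimal illustration, assuming a hypothetical org domain of labmeinc.com from the earlier whaling example: score how close a sender domain is to ours & flag near misses such as 1abmeinc.com.

```python
# Illustrative fuzzy-match sketch (domain names are assumptions): flag
# sender domains that are close to, but not exactly, our own domain --
# the same idea as Kibana's "~" fuzzy operator.
from difflib import SequenceMatcher

OUR_DOMAIN = "labmeinc.com"  # hypothetical org domain

def cousin_score(domain: str) -> float:
    """Similarity ratio between a candidate domain and ours (1.0 = identical)."""
    return SequenceMatcher(None, domain.lower(), OUR_DOMAIN).ratio()

def looks_like_cousin(domain: str, threshold: float = 0.85) -> bool:
    """True for near matches that are NOT our real domain."""
    return domain.lower() != OUR_DOMAIN and cousin_score(domain) >= threshold

print(looks_like_cousin("1abmeinc.com"))  # near match, not exact: True
print(looks_like_cousin("example.org"))   # unrelated domain: False
```

The 0.85 threshold is a tunable assumption; in practice you would baseline it against your own mail flow.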
Hierarchical Spear Phishing
Spear phishing that impersonates executive staff to pressure a response, ex: fake CFO
Possible to use a list of names to look for phishing attempts
Look for display names of key staff & tag, alert, or trigger an action when mail is sent from outside with that display name
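The display-name check above can be sketched as a simple rule. The staff names & internal domain below are fabricated for illustration:

```python
# Hypothetical sketch: flag inbound mail whose display name matches a key
# executive but whose sending domain is external. Names and the internal
# domain are illustrative assumptions.
KEY_STAFF = {"jane hart", "john smith"}   # executive display names (assumed)
INTERNAL_DOMAINS = {"labmeinc.com"}

def is_suspicious(display_name: str, from_addr: str) -> bool:
    """True when a key-staff display name arrives from an external domain."""
    domain = from_addr.rsplit("@", 1)[-1].lower()
    return (display_name.strip().lower() in KEY_STAFF
            and domain not in INTERNAL_DOMAINS)

print(is_suspicious("Jane Hart", "jane@1abmeinc.com"))  # external source: True
print(is_suspicious("Jane Hart", "jane@labmeinc.com"))  # internal source: False
```

In a SIEM, the same logic would become a tag or alert on the enriched SMTP event rather than a standalone script.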
Outbound SMTP
Things to look for:
Source IP addresses of the internal system sending email
SMTP mail user agents (MUA)
Outbound SMTP Clipping level
Max emails sent per system, per hour?
What if that were to increase exponentially?
Could be a legitimate increase
Could be an operational issue
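The clipping-level idea above can be sketched as a per-source, per-hour counter. The 100/hour ceiling is an assumption for illustration, not a recommended baseline:

```python
# Sketch of an outbound SMTP clipping level (the threshold is an
# assumption): count messages per internal source IP per hour and flag
# sources that exceed the expected ceiling.
from collections import Counter

MAX_PER_HOUR = 100  # hypothetical baseline for this environment

def over_clipping_level(events, max_per_hour=MAX_PER_HOUR):
    """events: iterable of (source_ip, hour_bucket) tuples from SMTP logs."""
    counts = Counter(events)
    return {src for (src, hour), n in counts.items() if n > max_per_hour}

events = ([("10.0.0.5", "2024-01-01T09")] * 250   # sudden burst -- flag it
          + [("10.0.0.9", "2024-01-01T09")] * 3)  # normal volume
print(over_clipping_level(events))  # {'10.0.0.5'}
```

Whether a flagged host is compromised or just a misbehaving application is exactly the "legitimate increase vs operational issue" triage noted above.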
SMTP C2
phishing attacks using an auth sys
-
DNS
DNS is an awesome log source with tremendous detection capabilities - so many things rely on DNS & there are lots of actionable detects we can use.
Adversaries must use it & we must use it - the way it is used differs, though.
DNS is like a transaction log of who is going where - it tells a story & supports forensics.
Things we can find: who is going to the phishing site, who is going to C2. All sorts of detection capabilities are found within DNS.
DNS types:
A - Name to IPv4
AAAA - Name to IPv6
CNAME - Alias to A record
MX - Maps domain to email server
NS - Name server record
PTR - IP to Name
TXT - Sends text over DNS (special purposes such as email auth: DKIM, or sender approval: SPF)
When we start to analyze malware/adversary activity, they basically abuse the way DNS is used & it goes right past all of our preventative controls. Yet some of these bypass techniques are extremely easy for us to catch if we are collecting DNS logs.
Value add fields
Frequency Score
Parent domain (google.com)
Subdomain (www)
Domain lengths
Domain Age
Geo-Info
Tagging
Filtering Records
Internal domains are used constantly & in high volume - they can compose 80% of DNS logs.
Ignore specific external domains
DNS Sinkhole
Resolves requests for bad domains to 0.0.0.0 or an IP of your choice
Provides an opportunity to add a value-add tag. The tag can then be used to alert or in dashboards.
Compromised from Phishing attempt
System phones home to 6kfOyJXFGCPjjui.com - changing IP every minute & data leaving over DNS
It looks random because adversaries use randomness to evade antivirus & prevention controls
We need to use tools that identify randomness
We are looking for things which are outside our natural language - tools such as freq_server.py & freq.py
English dictionary - if the first letter is q, then the second is u - that's what follows a q.
freq.py uses natural language processing to identify a string which is not what we expect to see. It would have a low freq score, meaning not natural.
freq.py can be run from the CLI against the random DNS string & measured against the DNS English table.
ex: freq.py -m 6kfOyJXFGCPjjui dns.freq
3.5866738
The lower the score, the higher the chance of randomness. Default scale is 0 to 35.
This is a CLI tool - you can't integrate it directly with the SIEM, as the SIEM generates too many logs & you would be invoking it too often for it to work.
freq.py can be used in script or manually
Dump unique domains from the previous day into dns.txt & use the -b switch the next day, as this is not real time
freq.py -b dns.txt dns.freq
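The character-pair scoring behind freq.py can be sketched in miniature. This is not the real tool's algorithm or table - the real freq.py trains on a large corpus - just an illustration of why "thequick" scores high & "6kfoyjxf" scores low:

```python
# Simplified sketch of character-pair frequency scoring in the spirit of
# freq.py (the real tool trains on a large corpus; the tiny corpus here
# is an illustration only). Lower scores suggest more randomness.
from collections import defaultdict

def train(corpus: str):
    """Count how often each adjacent character pair occurs in natural text."""
    pairs = defaultdict(int)
    totals = defaultdict(int)
    for a, b in zip(corpus, corpus[1:]):
        pairs[(a, b)] += 1
        totals[a] += 1
    return pairs, totals

def score(text: str, pairs, totals) -> float:
    """Average probability (as a percentage) of each adjacent pair."""
    probs = [100.0 * pairs[(a, b)] / totals[a] if totals[a] else 0.0
             for a, b in zip(text, text[1:])]
    return sum(probs) / len(probs) if probs else 0.0

model = train("the quick brown fox jumps over the lazy dog " * 50)
print(score("thequick", *model) > score("6kfoyjxf", *model))  # True
```

The real dns.freq table plays the role of `model` here: a frequency table built from names you expect to see.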
freq_server.py is a REST-style API interface that the SIEM can call out to. The Logstash rest filter plugin can be used to call it.
Newly observed domain
If anyone reaches out to a domain "xyz.com" that no one has ever visited before.
At the end of the day, generate a list of newly observed domains & then review it the next day.
Monitoring newly observed domain requests can be a powerful way to detect malware.
Use ElastAlert to write a new_term rule to monitor new domains that were never seen before.
Do it once a day. If you are going to be attacked [cousin/random domain], it is going to come from a new domain.
A 15-30 min check by one person per day.
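At its core, the daily newly-observed-domain review is a set difference between today's queried domains & everything seen historically - a minimal sketch, with the sample domains fabricated:

```python
# Sketch of newly-observed-domain detection (sample domains are
# assumptions): diff today's unique queried domains against everything
# seen historically.
def newly_observed(today: set[str], history: set[str]) -> set[str]:
    """Domains queried today that have never been seen before."""
    return today - history

history = {"google.com", "msn.com", "ubuntu.com"}       # all past queries
today = {"google.com", "6kfoyjxfgcpjjui.com"}           # today's unique queries
print(newly_observed(today, history))  # {'6kfoyjxfgcpjjui.com'}
```

ElastAlert's new_term rule does this same diff continuously against the index, so you don't have to maintain the history set yourself.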
Baby Domains
Look at when a domain was registered - whois can be used to pull a domain's creation date. Even 30 days since registration is abnormal. Studies have shown that malware routinely will set up a domain, phish from it & in less than 24 hrs rotate to a new domain.
domain_stats.py checks the registration date of a domain. It also has the ability to query the Alexa top 1 million.
Age discrimination dashboard
event_type:dns AND creation_date:[now-3M TO now]
Anything registered in the last 3 months - why am I going to these sites?
You will find here: phishing, malware domains, domain generation algorithms, fast flux - multiple detects.
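The baby-domain check reduces to parsing a WHOIS creation date & comparing against a threshold. A hedged sketch - the sample record is fabricated, and a real pipeline would call whois or domain_stats.py rather than hardcode text:

```python
# Hedged sketch of a "baby domain" check: parse a WHOIS creation date and
# flag domains younger than a threshold. The sample record below is
# fabricated for illustration.
from datetime import datetime, timedelta

SAMPLE_WHOIS = "Creation Date: 2024-01-10T00:00:00Z"

def is_baby_domain(whois_text: str, now: datetime, max_age_days: int = 90) -> bool:
    """True when the WHOIS creation date is within max_age_days of now."""
    for line in whois_text.splitlines():
        if line.startswith("Creation Date:"):
            created = datetime.strptime(line.split(": ", 1)[1],
                                        "%Y-%m-%dT%H:%M:%SZ")
            return (now - created) < timedelta(days=max_age_days)
    return False  # no creation date found; treat as unknown

print(is_baby_domain(SAMPLE_WHOIS, datetime(2024, 2, 1)))  # 22 days old: True
print(is_baby_domain(SAMPLE_WHOIS, datetime(2025, 2, 1)))  # over a year: False
```

The 90-day window mirrors the `creation_date:[now-3M TO now]` dashboard query above.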
Even with DNS logs alone we are showing multiple ways of catching attacks - & we are still on one data source.
DNS Augmentation for the WIN - Multiple things we can catch
C2 phone home to new domain - CAUGHT by monitoring newly observed domains
C2 phone home using random name - we can catch things like random names: generated domains & one-time random names
C2 phone home using young domain - CAUGHT - lots of malware & worms spread using newly registered domains
Applicable to the phishing domain from the initial scenario:
Newly accessed domain requests - CAUGHT - if we have an attack going on
Young domain requests - CAUGHT
NXDomain
When a DNS query fails, an NXDOMAIN response is returned:
Occurs normally when:
A user makes a typo when accessing a site
A misconfigured application is running
Google Chrome probes for DNS hijacking
An infected device uses a domain generation algorithm
DNS Enumeration
Do file.xyz.com, web.xyz.com, etc. exist?
Some attacks involve asset discovery using DNS - DNS brute-forcing tools such as DNSRecon or Nmap's dns-brute script
These will generate lots of NXDomain requests
If done internally, you will see these
If external, you will only see these if you host the authoritative DNS servers for your domain
Fast Flux
Uses a single domain & quickly rotates DNS A records
Often changes record within minutes
Prevents blacklisting IPs & hides backend servers
Connections quickly rotate amongst infected hosts - diagram section 2 5 029
Also check double fast flux
NXDomain Detect
Monitoring NXDomain responses by IP will detect:
DGA use
DNS Recon
Misconfigured sys
Section 2 5 33
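The NXDomain detect above is another clipping level: count NXDOMAIN responses per source IP. The 100/hour threshold is an assumption for illustration:

```python
# Sketch: count NXDOMAIN responses per source IP and flag noisy hosts
# (the threshold of 100 is an assumption, not a vendor default).
from collections import Counter

def nxdomain_offenders(responses, threshold=100):
    """responses: iterable of (source_ip, rcode) tuples from DNS logs."""
    counts = Counter(src for src, rcode in responses if rcode == "NXDOMAIN")
    return {src for src, n in counts.items() if n > threshold}

responses = ([("10.0.0.7", "NXDOMAIN")] * 500   # DGA or DNS-recon suspect
             + [("10.0.0.2", "NXDOMAIN")] * 3   # likely just typos
             + [("10.0.0.2", "NOERROR")] * 40)  # normal browsing
print(nxdomain_offenders(responses))  # {'10.0.0.7'}
```

A flagged host could be DGA malware, DNS recon, or a misconfigured system - the three causes listed above - so the result feeds triage, not an automatic block.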
DNS Tunneling
A favorite attack from red team pen-test assessments - very common for pentesters to come in & demo this.
One attack technique that I have used to control systems & steal data & have never been caught doing on a pentest.
Attack: let's say I have an infected system under my control & I point it at an external-facing server on port 53. Now I can exfiltrate data, because you are not locking down outbound port 53 to only the external DNS servers you trust.
You should be allowing only authorized DNS servers outbound.
Advanced DNS Tunneling
I stand up an evil DNS server I control on the internet, register a domain that I own delegated to that external DNS server & tunnel to it. This works even if you are limiting which outbound servers can be used, because DNS is recursive & proxied by default.
It is hard to prevent but easy to detect - you will have massive DNS logs coming from a desktop, mostly TXT records.
DNS Tunneling detection
Monitor for abnormal requests such as:
Limit which external DNS servers can be used
Limit access to those authorized internal DNS servers
A large number of requests from a single IP address
Use of special DNS query types such as TXT records
Monitor NXDomain records
Monitoring for :
TXT queries ; # of queries diagram 2 5 37
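The TXT-volume detect above can be sketched as another per-source counter. The cut-off of 20 TXT queries is an assumption; a workstation normally issues almost none:

```python
# Sketch of a DNS-tunneling heuristic (threshold is an assumption): a
# workstation issuing many TXT queries is far outside normal behavior.
from collections import Counter

def txt_heavy_hosts(queries, max_txt=20):
    """queries: iterable of (source_ip, qtype) tuples from DNS logs."""
    txt_counts = Counter(src for src, qtype in queries if qtype == "TXT")
    return {src for src, n in txt_counts.items() if n > max_txt}

queries = ([("10.0.0.8", "TXT")] * 300    # tunneling suspect
           + [("10.0.0.3", "A")] * 300)   # normal lookups
print(txt_heavy_hosts(queries))  # {'10.0.0.8'}
```

Combining this with the NXDomain counter & overall request volume per IP covers the three monitoring bullets above.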
Internet Access
All traffic accessing the internet should have a DNS entry
Why not monitor all unique IP add accessing the Internet without using DNS?
Sometimes evil sometimes not - worth investigating
Exceptions will include:
Microsoft IP addresses
CDN IP addresses
Root DNS server IP addresses
Vendor stuff
IP Detection
Effectively, dump a list of DNS requests & answers & compare that to all the outbound connections made to external IPs. Diff the lists & that's the report you look at once a day. Look for direct IP communication to the internet.
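The daily diff described above is just set subtraction - a sketch with fabricated sample IPs:

```python
# Sketch of the daily "direct IP" report: diff outbound destination IPs
# against everything returned in DNS answers; whatever remains was
# reached without a DNS lookup. Sample IPs are fabricated.
def direct_ip_connections(outbound_ips: set[str], dns_answers: set[str],
                          exceptions: set[str]) -> set[str]:
    """Outbound destinations never seen in a DNS answer (minus carve-outs)."""
    return outbound_ips - dns_answers - exceptions

outbound = {"93.184.216.34", "198.8.93.14"}  # all external destinations today
answers = {"93.184.216.34"}                  # came back in an A record
cdn_and_vendor = set()                       # Microsoft/CDN/root-server carve-outs
print(direct_ip_connections(outbound, answers, cdn_and_vendor))  # {'198.8.93.14'}
```

The exceptions set is where the Microsoft, CDN, root-server & vendor addresses listed above get whitelisted.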
Catching Internet-Based Malware
It is easy to set up monitoring of newly accessed domains
+
It is easy to find direct IP calls to the internet
Alert against 99.99% of internet-based network attacks
You can catch 99% of internet-based attacks if you are monitoring newly observed domains, newly minted/registered domains & direct IP traffic - that means you are monitoring all new communication. If I stand up a new domain & attack from it, it's a newly observed domain; if I try to attack you from a direct IP, I'm on a different report. Either way they are going to end up on one of those lists.
DNS detects for the win:
Machine was pwned via a phishing attack - DNS-based detects were used for the win.
Phone home using fast-flux DNS - Caught
Botnet over DGA - NA
Basic DNS tunneling - CAUGHT
Advanced DNS Tunneling - NA
Frequency analysis of domain name - Caught
HTTP
HTTP Direction matters
Inbound HTTP - Designated access to internal webserver
Expected Use- Web Server
Unexpected Uses:
Brute force logins
SQL Injection
OS commands
XSS attacks
Exfiltration (large downloads)
Outbound HTTP- Designated access to external web server
Expected use- Web client
Unexpected uses:
Command & Control
Remote access trojan
DDOS
Stage2 downloads
Data exfiltration
Most common protocol use today
Almost everything runs a web app - cameras, printers, Chromebook devices have it built in
We will use HTTP server logs to find different levels of unauthorized activity
HTTP Log Source
Inbound - traditional web service
Web Application FW
Apache , IIS, Nginx
IDS
-
Log source includes
Zeek/Suricata - both inbound & outbound HTTP
Packetbeat - logs both inbound & outbound HTTP
Scripts - do anything they are told to do
Cloud Logs/API
Binaries/Application
WebServer Log formats
Combined log - default for Apache & Nginx (both can flip to JSON)
NCSA common - text file with basic HTTP fields
W3C extended - default for IIS
Apache/Nginx Logging
Generates 2 logs: access.log & error.log
access.log is the default request logging; error.log is used to record errors.
Both allow for granular log formatting
Supports logging in JSON & changing the format of fields
IIS Logging
W3C - default logging
Supports NCSA & IIS formats
Outbound traditional approach - SQUID Proxy
Ideally every org has a web proxy that traffic explicitly goes through, but most orgs don't use an explicit proxy - they use a transparent proxy. Malware is often not proxy-aware: if you browse a site & get infected, the malware may not know how to reach back out to the internet because it is not configured to use the explicit proxy. Blocking stage-2 downloads this way is both a preventive & a detective control.
SQUID Proxy Logs
Logs are very similar to web server logs. A few extra fields: was it served from cache, what's the MIME type.
XFF [ X-Forwarded-For]
Proxy terminates a client connection & starts a new one
This means the source IP address looks like the proxy. XFF keeps track of the original IP address - Section 2 6 14
Unauthorized proxies
If the XFF field is present, a proxy is being used - this can be used to find unauthorized proxies
Network Observations
Run something like Bro - mirror traffic to it & now you are seeing both inbound & outbound HTTP. As long as Bro can see the traffic going across the NW, we have inbound/outbound logs.
HTTP default fields:
Source IP; source port
Dest IP; dest port
VHost signifies the site we are connecting to
Status code - what happened when you tried to reach the page; URI; whether the proxy was involved
HTTP- ValueAdd Fields - Enrichment
Geo-IP; we can do frequency analysis.
We can add different things here - pretty much all the things we did for DNS. No need to duplicate if we are already catching the attacks through DNS vs HTTP; we will focus on different types of detection.
We are still going to do things like freq analysis & field-length checking.
Freq score of VHost/ServerName
Field lengths (URI, User-Agent, VHost). Tags.
HTTP Scenario
You receive call from CIO:
"Dr. Hart's machine is acting slow, and he is cranky..."
"Also, one of the web servers crashed last night"
Which issue is the highest priority?
behind the scenes:
The good doctor got himself phished & is now a bot
And the web server crashed due to an unauthorized vuln scan. Now data is being exfiltrated.
HTTP Components
HTTP has many moving parts
Methods - the action a client wishes to perform; different methods get invoked: GET, HEAD, POST, PUT, DELETE - volumes change under attack.
Status code - a web server response code; its volume affects things too
VHost - the name of a site, such as google.com
Certain conditions show up that make a "normal" field look "abnormal"
Security practitioners should get into the mentality of knowing what fields they have & what those fields look like under normal conditions vs abnormal [caution - you must play with the moving parts] vs everything else [danger - moving parts]
Methods
Methods indicate the action HTTP wants to perform - some common methods are:
GET - when I reach out to a webpage, I am typically doing a GET request - asking for the page to be pulled down so that I can view it. Typically used to retrieve data.
POST - if I am filling out something like a form, I am doing a POST request. These are the two most common methods.
HEAD- used to retrieve HTTP headers only
PUT - Used to create or update a specified resource
DELETE - Used to delete a specified resource
Normal HTTP Method use
GET Method:
reached out to msn.com
Nearly 90 GET requests in <1 sec & then it stops - ads on the page load via GET requests as well when you reach it
Downloading an Ubuntu ISO (large file):
2 GET requests in <1 sec - the initial request & then the download - then it stops
POST Method:
If I fill out a form like a contact-us page, that's one POST & you are done
Adware/malicious coupon-toolbar thing:
~80 POSTs in 30 seconds - a spike, then it stops
Abnormal HTTP Method use
GET method:
Meterpreter is an open source C2 agent, easy to use over HTTP. If I compromise an asset & control it with Meterpreter over HTTP, I would see an HTTP request approximately every 3 seconds, all day, by default - a C2 with a low beacon interval of every 3 sec - as GET requests.
If I do a SQL injection attack, I would expect to see ~228 GET attempts in 1 min 2 sec - very rapidly testing parameters on the website, seeing if it can bypass auth & access the DB at a rapid pace.
If I do a scan against a web server - a traditional web crawl - I would see 1000s of GET requests, again in a short period.
POST Method:
Cutwail botnet C2 - instead of GET requests you will see POSTs. You may see 600 POSTs in less than 70 seconds - a very rapid pace, especially during data exfiltration. SQL injection shows a similar 228-in-a-period pattern - doing the same type of attempts with POST as with GET to see if the web server responds differently.
Methods/Clipping levels
GET per minute
If I see more than 400 GET requests within a minute, it is red. If it is lower, between 100-300, it's yellow/orange.
Something similar can be done for POST with a lower value, as it is less common [per 5 min per source]. Red tends to be a true alert & yellow tends to be a false positive. Make dashboards red & green; if it's too much, ignore the yellows/oranges.
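The red/yellow/green bands above can be sketched as a simple severity function. The exact cut-offs are this course's example values, and treating the 300-400 gap as yellow is my assumption:

```python
# Sketch of the GET-per-minute clipping level with the red/yellow bands
# described above. Cut-offs are example values; the 300-400 range is
# assumed to stay yellow.
def get_severity(gets_per_minute: int) -> str:
    if gets_per_minute > 400:
        return "red"      # likely scan, crawl, or SQL injection
    if gets_per_minute >= 100:
        return "yellow"   # worth a look, often a false positive
    return "green"

print(get_severity(90))    # green -- a normal page-load burst
print(get_severity(228))   # yellow -- e.g. rapid parameter testing
print(get_severity(1000))  # red -- crawler or vulnerability scan
```

A POST version would use lower thresholds over a 5-minute window per source, as noted above.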
Status code
1xx informational
2xx Success - OK, the page downloaded successfully
3xx Redirection - e.g. I hit http & the server redirects me to https
4xx Client error
5xx Server error
404 Not found- Active detection technique
404 is similar to DNS's NXDomain response: it occurs when a request is made for something that does not exist, but it doesn't occur naturally in high volume.
Can occur due to typos but when monitored will discover:
Web crawlers [generate them in bulk]
VA scanners
Misconfigured websites - generate lots of 404 msgs
For detective purposes - 1 404 can equal bad
The robots.txt file says certain crawlers should not index or crawl certain paths or files on the web server - we say "hey Google, don't go to /admin, leave that alone"
Malicious tools & unauthorized crawlers don't honor robots.txt - they read it to see where they want to go. We can create a robots.txt entry which points to a nonexistent file or folder; a 404 against that path means someone is doing something they shouldn't.
It is something we can pivot off - just another clipping level. It works very well as a measure.
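The robots.txt tripwire above can be sketched as a one-line rule. The trap path below is fabricated - in practice you would pick any path that exists nowhere on the site:

```python
# Sketch of the robots.txt tripwire: list a path in robots.txt that does
# not exist anywhere on the site, then alert on any 404 against it.
# The path name is an assumption for illustration.
TRAP_PATH = "/old-admin-backup/"   # listed in robots.txt, never linked anywhere

def is_tripwire_hit(uri: str, status: int) -> bool:
    """True when something requested the trap path and got a 404."""
    return uri.startswith(TRAP_PATH) and status == 404

print(is_tripwire_hit("/old-admin-backup/db.bak", 404))  # True -- someone read robots.txt
print(is_tripwire_hit("/index.html", 404))               # False -- ordinary typo
```

Because no legitimate link ever points at the trap path, a single hit is a high-confidence detect - the "1 404 equals bad" case.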
200 ok
Too much of a good thing is really a bad thing. If someone is crawling or scanning a site, there will be lots of unique URIs accessed with a status code of 200.
blackhills tool on this- section 2 6 29
Host header
The Host header identifies which site to load; e.g., when you access google.com, the Host header of google.com is used to load that site.
What if, instead, it was 198.8.93.14?
The host header is an IP address - by default, if the host header is not set, it's kind of a catch-all: anything drops here. But it doesn't have to be that way - you could say that if there is no valid DNS-based host header, the page doesn't get accessed at all. Configuring this is recommended.
We need to look for odd host headers, specifically IPs - naked IP requests missing a DNS name. Malware routinely does this in scans - so look for host headers that contain an IP address.
Naked IP
Using an IP in the VHost is referred to as a naked IP - not normal from inside the NW to external
Malware using DNS is less likely to be caught, yet naked IPs are common in malware
Goal is to alert, or keep an empty dashboard, for the use of naked IPs from internal to external
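The naked-IP check above reduces to asking whether the VHost field is a bare IPv4 address - a minimal sketch:

```python
# Sketch: flag HTTP requests whose Host header is a bare IP address
# ("naked IP") rather than a DNS name -- common in malware and scans,
# rare in normal internal-to-external browsing.
import re

# Loose IPv4 shape check (optionally with a port); octet range not validated.
IPV4 = re.compile(r"^\d{1,3}(\.\d{1,3}){3}(:\d+)?$")

def is_naked_ip(vhost: str) -> bool:
    """True when the VHost field looks like an IP address, not a name."""
    return bool(IPV4.match(vhost))

print(is_naked_ip("198.8.93.14"))  # True -- naked IP
print(is_naked_ip("google.com"))   # False -- normal DNS-based VHost
```

The regex is deliberately loose (it would accept 999.1.1.1); for a dashboard feed that over-match is usually acceptable.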
URL Lengths
Web page visits often fall within a short URL length.
But under attacks such as SQL injection, these URL lengths get very large - in one failed SQL injection attack the length was 607.
Active detection is to have a length threshold set - visit 2 6 34
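The length threshold above can be sketched as a filter over logged URIs. The 500-character cut-off is an assumption - tune it against your own long tail of normal URI lengths:

```python
# Sketch of a URL-length threshold (the 500-character cut-off is an
# assumption; tune it to the long tail of normal URI lengths you see).
def long_uris(uris, max_len=500):
    """Return the URIs that exceed the length threshold."""
    return [u for u in uris if len(u) > max_len]

normal = "/index.html"
# Fabricated example in the style of a failed SQL injection attempt:
injected = "/item?id=1%27%20UNION%20SELECT%20" + "A" * 600
print(len(long_uris([normal, injected])))  # 1
```

Pairing this with status-code monitoring covers the SQL-injection detect listed in the summary below.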
User agents
User agents are high value-add with little work
Used to identify the client connecting to a web server - section 2 6 35
HTTP review/summary - it's an awesome data source
We can use statuses, methods & naked IP requests
Detection of Vulnerability Web Scan
Found by monitoring status codes; also found by monitoring methods
SQL Injection
Found by monitoring URL lengths & possible status codes
PC acting as a bot
Found by monitoring methods or user agents
Methods - such as bulk requests
Status codes - 404 & 200 monitoring, as well as long-tail analysis
HTTPS
Certificate analysis is possible if we have some way to observe the certificate over the wire - Bro is a fantastic way of doing this
Pentester was caught:
They used Meterpreter, so we would find:
Missing fields - Detected
Found via freq analysis:
High entropy - Detected
Self-signed cert - Detected
Lack of .com - multiple detects & we only need one of them to work
Unusual common name - Detected
SIEM
Lec1
Logs & deeds have something in common.
A person's good & bad deeds - these are all logs.
Imagine if we were able to see our good & bad deeds on a daily basis - we would make good decisions: "my bad deeds are increasing, so let me increase my good deeds," so that whenever a decision is made, it is a good one. A SIEM gives an IT professional this kind of complete view.
-