incident response For the past 1.5 years, playing on the defensive side within Kudelski Security Security Architect in the Technology Team “Full stack security architect: from assembly to Gartner reports and beyond”
complete Many types of attacks, and many ways to detect them, manually or automatically, through logs IDS are another way, have a look at https://speakerdeck.com/milkmix/clusis-campus-2015-introduction-to-suricata-ids For Windows logs: https://speakerdeck.com/milkmix/import-module-incidentresponse
servers from Switzerland The source IP can be matched against a GeoIP database to retrieve the country of origin Example use-cases detect fraudulent connections coming from a different country match the source IP against known malicious hosts
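The second use-case can be sketched in a few lines of Python; the blocklist networks below are illustrative placeholders (documentation ranges), not a real threat feed:

```python
import ipaddress

# Hypothetical blocklist of known malicious networks (CIDR notation).
# In practice this would be loaded from a threat-intelligence feed.
MALICIOUS_NETWORKS = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]

def is_malicious(source_ip: str) -> bool:
    """Return True if the source IP falls inside a known malicious network."""
    ip = ipaddress.ip_address(source_ip)
    return any(ip in net for net in MALICIOUS_NETWORKS)

# Flag suspicious source IPs extracted from log events
events = ["192.0.2.10", "203.0.113.42"]
flagged = [ip for ip in events if is_malicious(ip)]
```

In a real pipeline the same check would run as a Logstash filter or an ElastAlert rule rather than a standalone script.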
is not common even on AWS: as an enterprise you might connect through a VPN or from a single known IP generate your own GeoIP.dat for internal addressing: https://github.com/mteodoro/mmutils
bytes sent Exploiting an SQL injection should generate a bigger request/response; exploiting a blind SQL injection requires more requests/responses Example use-cases detect SQL injection exploitation by looking for higher bytes-sent values detect blind SQL injection exploitation using query frequency
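Both heuristics above can be prototyped offline before turning them into alerting rules. A minimal sketch, assuming (uri, bytes_sent) and (source_ip, minute) pairs already extracted from the access logs (all values below are made up):

```python
from collections import Counter
from statistics import median

# Hypothetical (uri, bytes_sent) pairs extracted from access logs
requests = [
    ("/search.php", 512), ("/search.php", 498), ("/search.php", 530),
    ("/search.php", 505), ("/search.php", 9800),  # whole table dumped?
]

def flag_outliers(samples, factor=10):
    """Flag responses much larger than the typical (median) size for the URI."""
    typical = median(size for _, size in samples)
    return [(uri, size) for uri, size in samples if size > factor * typical]

def flag_high_frequency(events, limit=100):
    """Blind SQLi: events are (source_ip, minute_bucket) pairs;
    flag sources issuing more than `limit` requests in one minute."""
    counts = Counter(events)
    return {src for (src, _), n in counts.items() if n > limit}
```

The thresholds (`factor`, `limit`) are arbitrary here; tune them against your own baseline traffic.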
allows uploading documents Bob fears the following two vulnerabilities: 1. unrestricted upload of a file with a dangerous type 2. improper control of the filename for an include/require statement in PHP Apache logs are located in /var/apache2/access.log
scripts If an attacker exploits the two vulnerabilities to upload a remote shell, his accesses will appear there Example use-case using the URI, detect any PHP script that was not requested in the last 30 days (or a shorter window if you are in agile mode)
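The use-case boils down to a set difference between the PHP URIs seen in the last 30 days and those seen today. A sketch, with hypothetical log lines (the shell path is invented for illustration):

```python
import re

# Hypothetical access-log request lines (Apache combined format, truncated)
recent_30d = [
    'GET /index.php HTTP/1.1',
    'GET /upload.php HTTP/1.1',
]
today = [
    'GET /index.php HTTP/1.1',
    'GET /uploads/sh3ll.php?cmd=id HTTP/1.1',
]

PHP_URI = re.compile(r'\s(/\S*?\.php)')

def php_uris(lines):
    """Extract the set of requested .php URIs (query string stripped)."""
    return {m.group(1) for line in lines for m in [PHP_URI.search(line)] if m}

# PHP scripts requested today but never in the last 30 days: webshell candidates
new_scripts = php_uris(today) - php_uris(recent_30d)
```

Against Elasticsearch, the same logic maps naturally to a terms aggregation on the URI field over two time ranges.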
he is willing to try the 2015 way “I should have a look at those search databases everyone has been talking about…” Elasticsearch for example! Wait… how do I ship logs to this Elasticsearch thing?
not really used in that particular order it includes: a log collector with enrichment capabilities: Logstash a search database based on Lucene: Elasticsearch an interface to keep management happy: Kibana
required verify that all servers are time-synchronised and/or have their timezone correctly set which fields should be kept? what information should be added to the events? … Event parsing is one of the pain points when doing log management/SIEM; the same applies to AlienVault, Splunk, …
it, it can, yes use Apache Kafka nodes to collect logs, then forward from them to Logstash to enrich/forward And for Windows events? use NXLog to ship events from Windows hosts
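As a rough sketch, the Kafka-to-Elasticsearch leg could look like the Logstash pipeline below. Broker addresses, topic and index names are placeholders, and parameter names vary between Logstash versions (this follows the 2.x-era style of the kafka input plugin), so check them against your version's documentation:

```conf
# Hypothetical Logstash pipeline: consume from Kafka, enrich, index into ES
input {
  kafka {
    zk_connect => "kafka1:2181"   # ZooKeeper ensemble used by the brokers
    topic_id   => "weblogs"       # topic the collectors publish to
  }
}
filter {
  # enrichment goes here, e.g. geoip lookups on the client IP field
}
output {
  elasticsearch {
    hosts => ["es-node:9200"]
  }
}
```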
in a JSON file and have it indexed in the background, ES creates the schema for you! in order to get correct and faster results when searching, a correct mapping is required the default one might not be optimal, or even functional, for you e.g. host names containing a dot, which is also a separator for the default analyzer
of document’s fields type can be string, integer, ip, date, boolean, binary, array, geo_point, … format is for date fields index is set to analyzed by default; the other value is not_analyzed
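A minimal example of defining such a mapping up front, using the Elasticsearch 1.x API (the index, type and field names below are assumptions for an Apache-logs use-case):

```shell
# Sketch: create the index with an explicit mapping before indexing anything
curl -XPUT 'http://localhost:9200/apache' -d '{
  "mappings": {
    "event": {
      "properties": {
        "host":      { "type": "string", "index": "not_analyzed" },
        "clientip":  { "type": "ip" },
        "timestamp": { "type": "date", "format": "dd/MMM/yyyy:HH:mm:ss Z" },
        "bytes":     { "type": "integer" }
      }
    }
  }
}'
```

`not_analyzed` on the host field keeps names like `web01.example.com` searchable as a single term instead of being split on the dots.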
a mapping cannot be changed for an existing index: you need to re-index all of it yep, this can be quite bad if you only discover it after indexing 1TB you can use aliases on indexes to switch to a new mapping faster think about your use-cases and perform tests gradually
rules written in YAML files generate alerts to several providers conventional ones: email, Jira or, for the more hipster among you: Slack, HipChat, PagerDuty
indexes no rules on term values this could be circumvented using filters, but not all features will work But ElastAlert is designed to be extensible: new rule types can be developed
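To make this concrete, a sketch of an ElastAlert frequency rule matching the blind-SQLi use-case from earlier (index pattern, field names and the email address are examples, and the thresholds are arbitrary):

```yaml
# Hypothetical ElastAlert rule: many requests to one URI in a short window
name: "Blind SQLi - high query frequency"
type: frequency
index: apache-*
num_events: 200
timeframe:
  minutes: 5
filter:
- query:
    query_string:
      query: 'uri: "/search.php"'
alert:
- email
email:
- "soc@example.com"
```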
of the network dedicated VLAN + firewall set up a load-balancing node and make it the only network-reachable endpoint also applicable to Hadoop and the like, …
SSL tunnels between nodes need a PKI (but who doesn't in 2015? ;)) wrap Elasticsearch in stunnel or a similar solution Easier: network segmentation so inter-node communications are not accessible Kibana/querying host behind a jump host, access through SSH tunnelling
writers and not only consumers for ES? set up a reverse proxy accepting SSL connections only Nginx for example ssl on; ssl_certificate /etc/ssl/cacert.pem; ssl_certificate_key /etc/ssl/privkey.pem;
/ auth_basic_user_file options in the configuration file do not forget to also add transport security, or the credentials travel in clear text Kibana and ElastAlert are compatible
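Putting the pieces together, a minimal nginx server block in front of Elasticsearch might look like this (certificate and htpasswd paths are examples; the SSL directives follow the nginx syntax of the time):

```nginx
# Sketch: nginx reverse proxy for ES with SSL + HTTP basic auth
server {
    listen 443 ssl;
    ssl_certificate     /etc/ssl/cacert.pem;
    ssl_certificate_key /etc/ssl/privkey.pem;

    auth_basic           "Elasticsearch";
    auth_basic_user_file /etc/nginx/es.htpasswd;

    location / {
        proxy_pass http://127.0.0.1:9200;  # ES itself only listens on localhost
    }
}
```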
really simple to set up: many points to cover probably why elastic.co released a product to cover them Shield please note the references to Marvel comics :)
to have a subscription-based license this is highly recommended as soon as you step out of the POC garden: expertise on ES can save you quite some time
index Linux servers accessed through ssh: write to the sshd index Kibana: read both indexes (and its own) ElastAlert: read both indexes, write to elastalert_status
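With Shield, this read/write split could be expressed as roles. A rough sketch in the spirit of a roles.yml (the role and index names are examples, and the exact syntax and privilege names must be checked against the Shield documentation for your version):

```yaml
# Hypothetical Shield roles - verify syntax against your Shield version
apache_writer:
  indices:
    'apache': index          # web servers may only write to their own index
sshd_writer:
  indices:
    'sshd': index
kibana_user:
  indices:
    'apache, sshd, kibana-int': read
elastalert:
  indices:
    'apache, sshd': read
    'elastalert_status': all
```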
log management: OK events correlation: not automated Needs some external development and administration compared to COTS solutions Or choose the “buy” way instead of the “make” way
like the following: logs, context, pcap, … storage: HDFS some use-cases: Elasticsearch some others: Cassandra and others: Neo4j Add some machine learning and shake hard… ;)
project state your current security maturity level list your assets, associated risks, threat models, … think about your use-cases e.g. work with results from pentests list the external sources that should be accessible from the SIEM e.g. threat intelligence feeds