
The Myths of the Top 10 Myths of PCI-DSS Part Two


Continuing where we left off yesterday, let's jump right into the next myth.

Myth 6 – PCI requires us to hire a Qualified Security Assessor

Technically true. Once again, for merchant level 3 and below, SAQs are good enough to be compliant. Here's how it works: merchants complete an SAQ, management signs it off, and they pass the Attestation of Compliance (AoC) over to whoever is asking – generally either the acquiring bank or the payment gateway. Some of these SAQs are easy; choosing the right SAQ is a little more work. While we are not going into SAQs in this article, here is a quick comparison of SAQ A (mainly for e-commerce merchants that outsource all processing functions) and SAQ D-MER (generally for merchants who store, process and transmit card data): 14 questions for SAQ A vs 326 questions for SAQ D-MER. That's right. It's 23X more work.

So while this myth is generally true, most merchants undergoing SAQ D-MER do not have the capacity to do it themselves, and hence require expertise from QSAs or consultants outside the company. What about the Internal Security Auditor (ISA) option?

Here's where it gets a little strange. In 2012, Mastercard released a statement:

“Effective 30 June 2012, Level 1 merchants that choose to conduct an annual onsite assessment using an internal auditor must ensure that primary internal auditor staff engaged in validating PCI DSS compliance attend PCI SSC ISA Training and pass the associated accreditation program annually in order to continue to use internal auditors.”

And

“Effective 30 June 2012, Level 2 merchants that choose to complete an annual self-assessment questionnaire must ensure that staff engaged in the self-assessment attend PCI SSC ISA Training and pass the associated accreditation program annually in order to continue the option of self-assessment for compliance validation. Alternatively, Level 2 merchants may, at their own discretion, complete an annual onsite assessment conducted by a PCI SSC approved Qualified Security Assessor (QSA) rather than complete an annual self-assessment questionnaire.”

What they are effectively saying is that Level 1 to 4 merchants CAN opt not to engage a QSA, but the caveat is that for levels 1 and 2, the assessment needs to be 'validated' by internal auditors. Not just any internal auditors, but auditors certified as 'ISA' by the PCI Council. Yes, it's a certification created to sign off SAQs.

If you do not have an ISA, you are stuck, and you will need a QSA to validate your SAQ. In most cases, having a QSA validate the SAQ is as much work as having them certify the environment, so you end up 'hiring' a QSA to validate it anyway.

So why not jump on the ISA bandwagon?

Well, you need to cough up around USD500 for the PCI Fundamentals course, then around USD3,000 – USD4,000 for the ISA course, and then USD1,000 every year for the requalification training fee. Only companies going through PCI-DSS themselves can have ISAs, so if you are a consultancy like us, you are out of luck.

Large merchants might want to invest in an ISA. But a note of caution: the ISA qualification is NON-transferable. So if you are an ISA for Company A and you move to Company B, your ISA status does not go with you. If Company B wants you to be their ISA, you need to go through the entire course again. Yes, even the Fundamentals course.

It is certainly less expensive to have an ISA validate your SAQ than to engage an external QSA, so large merchants might opt to keep one or two ISAs in their stable and invest in them yearly.

Myth 7 – We don’t take enough credit cards to be compliant

PCI likes to state that even if you take ONE credit card, you are supposed to be PCI certified/compliant. But honestly, unless that one credit card transaction is to buy a Bugatti Veyron, the acquirer is not likely to come knocking on your door asking you to become PCI compliant. The theory is that everyone who deals with credit cards will happily invest the time to go through the SAQ and the 12 requirements. The reality is starkly different. Businesses have 600 different things to look into daily, and most turn a blind eye to PCI as long as there is no burning platform or pressure from above. The card brands push the acquirers; the acquirers push the payment processors, gateways and large merchants; and the payment processors push their service providers. Somewhere down the line, the little travel agency around the corner that collects credit card information – jotting down the PAN and CVV in a log book so they can book flights online on behalf of the customer – is overlooked. As long as there is no massive exercise to push everyone to be PCI compliant, there will be organisations that continue to operate outside the PCI requirements. Yes, your CVV will still be kept in a log book by that little travel agency – still oblivious to why storing CVV is such a big deal.

Myth 8 – We completed a SAQ so we’re compliant

Well – technically, you are. Again, 'being compliant' is not really an end state in itself. How can anyone sustain compliance 100% of the time? When Target was breached, they had just been re-certified as compliant. Hence, the word 'compliant' is generally just used as a punchline for businesses. For instance: an e-commerce company starts an online payment system. They register with an acquirer, and the acquirer tells them to be 'PCI compliant'. They finish their SAQ and submit it. The acquirer is happy with the sign-off and allows them to connect. The company proudly displays a "PCI Compliant" logo (which is not allowed, by the way) prominently on their website. They have successfully completed an SAQ, and they are 'compliant' because the acquirer tells them that they are. If they were not compliant, they wouldn't be able to connect. The fact that this is allowed shows that Myth 8 is actually true!

Myth 9 – PCI makes us store cardholder data

It's true that PCI would rather you NOT store cardholder data. But this myth doesn't make any sense. Businesses don't shape their processes around PCI; it's because of the business processes that there is a need for PCI in the first place. So it's up to the business whether to store, transmit or process cardholder data. Nobody goes into PCI-DSS saying: oh, because of PCI-DSS we now need to store data and invest in HSMs, key management, encryption and so on. Because of PCI, we now need to have a payment business? I have never seen such a client. It's always the other way round. Based on your business, PCI might or might not apply.

Myth 10 – PCI is too hard

This is the same argument as Myth 5. The PCI SSC makes a good point by saying it's good practice to have controls in place regardless of PCI-DSS compliance. But the myth exists because they are effectively stating PCI is not hard, simply because you should be practicing good security in the first place. To many, good security IS hard! Staff turnover, zero-day attacks, business-as-usual priorities, advancing technologies, software and hardware going obsolete, pressure from management, costing issues, new vulnerabilities and exploits discovered (and not yet discovered) – and the fact that in the cybercrime world the bad guys are miles ahead of the good guys – security is hard, make no mistake about it.

So there you have it. You would think, with a post like this, that PCI-DSS is a fruitless endeavor. Far from it. It's an excellent repository of security practices that all organisations should consider. While some of the standards in there show their age (anti-virus, anyone? Please.), overall it's one of the more direct, implementable standards we have experienced (compared to the labyrinth we know as ISO27001). The point of the post is to clarify that sometimes, standards in practice can turn out quite different from standards in documentation.

Now – should you check if your CVV is stored by your travel agency?

The Myths of the Top 10 Myths of PCI-DSS


A while back, the PCI Council published a good article called the Ten Common Myths of PCI-DSS, basically to debunk a few conclusions people might have (or so they think) about PCI-DSS.

After wading deep into this standard for the past 6 years, I am taking another look at these myths, and I am like: wait a minute, this isn't exactly correct.

Myth 1 – One vendor and product will make us compliant

This myth is hard to beat. It's obvious that one vendor and one product doesn't make anybody compliant, since PCI-DSS is so much more than a product or an implementation. It's the practice of security within an organisation itself. But wait. Not all PCI projects are created equal, and here's where what we call 'scope reduction' comes into play. It is possible for a product to reduce scope so significantly that it becomes almost easy to be compliant. Take tokenization, for instance. This is a solution created to remove the need to handle and store actual card data in a merchant environment. Instead, a token is used, mapped to the real card number in a token vault provided by a service provider. Hence, the merchant has neither the key nor the means to decrypt. Of course, they still handle the card data the first time it is transmitted through, but it removes the need for the merchant to fill in the dreaded SAQ D-MER, as they no longer store card data. Or take P2PE. When it started, the solution was to provide point-to-point encryption so that merchants need not have the means to manage keys or decrypt data. Of course, P2PE bombed, and the standard had to be revised to make it more realistic.

Myth 2 – Outsourcing card processing makes us compliant

Again – outsourcing card processing might not immediately make you compliant, but it sure as heck makes it a lot easier to be compliant! With the new revisions of the SAQ, we have the nicely flavoured SAQ A and SAQ A-EP for e-commerce merchants, to avoid the death knell of SAQ D-MER. There are around 9 flavours of SAQ (self-assessment questionnaire, if you are wondering), and merchants differ in their PCI journey depending on their business. Outsourcing to a PCI compliant card processor or payment gateway is a great way to reduce your scope. So while this myth is correct, outsourcing is still a great strategy if payment processing is not your core business.

Myth 3 – PCI compliance is an IT project

Everyone in the board room and steering committee will nod their head and agree to this one sagely. But from experience, I will tell you: whether it's a business or IT project, the IT guys will be significantly involved in it. If you think you can breeze through this sucker the way you championed through ISO27001, you are in for a little surprise. A large part of the 12 requirements deals with technical matters, from firewall configuration to antivirus to logical access controls. Logging and key management are significant challenges in our experience, and an entire Requirement 6 deals with patch management and secure coding practices. So if you are not familiar with terms like OWASP, KEK, DEK, WSUS, TACACS/ACS/LDAP, XSS, CSRF, CVE and the like, it's time to get cracking. Having done both ISO27001 and PCI, we can say the ISO is more of a best practice/guideline on security, while PCI is a standard. Either you do or you do not. There is no try.

Myth 4 – PCI will make us secure

I know what this myth is trying to say, but technically, if you are practicing PCI, you are a heck of a lot more secure than someone who isn't. Besides, being 'secure' isn't a final state – it's not possible; rather, it should be a constant practice of a hundred different activities that contribute to 'being secure'. It's like when people talk about 'enlightenment' or 'world peace' – it's not actually achievable, and even if it were, it wouldn't be sustainable. So yes, PCI will make you secure, relative to the company that has its server under a marketing director's desk with the password "PASSWORD".

Myth 5 – PCI is unreasonable; it requires too much

Obviously, the PCI Council wants you to think this is a myth (again, they are saying these are myths, so whatever you read, the PCI Council is asking you to think the opposite). They are saying: PCI is reasonable, it doesn't require much.

I disagree.

It requires a lot.

I mean, OK, if you are talking about SAQ A or A-EP, fine, agreed, it's a breeze. But if you are talking about SAQ D or a full certification?

Unless you have unlimited resources, money and time, or are already practicing some Level 5 maturity of security, you need to really look into managing PCI. Most of our clients aren't in that state, so when you talk to them about the amount of work needed? Oh boy.

Let's say they have 40 devices in scope. Multiple applications, running on multiple servers. Several layers of firewalls. Firewall rules need to be clean – which sounds easier than it is. The amount of legacy rules we see at some clients would make you think their firewall has been around since the internet was invented. Server upgrades due to EOL. Network changes because the database is accessing the internet directly. Applications not patched. Devices not updated. Logging not centralised. No correlation of logs or event management. No incident management. No central management of passwords and users. Applications developed eons ago whose developers have since left the company, the only documentation being a note saying, "Goodbye and thanks for all the fish!". You get the drift.

Myth 5 and Myth 10 (PCI is too hard) are the same. When you put the word "TOO" in there, it brings in relativity. What is "TOO" much to some companies? When you have a single administrator running the whole thing, dividing his time between this and a thousand other things, "TOO" might be the key word. So I won't say a full certification isn't possible – it obviously is, since we have certified a number – but what we don't want is our clients walking into PCI expecting to breeze through and then getting slammed like a deer in headlights. It will take effort. It will take resources. It will take money, and it will take time. Yeah, time. Around 3 – 4 months, if you are lucky, for a full certification. I often get a stunned look of disbelief and a general retort that goes: "<insert invectives here>, I thought it would only take 2 – 4 weeks, man."

Look – PCI is really useful, and the intention is not to discourage people from going for it – but it would be great to be better informed so that expectations can be synced to reality. In the next article, we will cover the remaining myths of PCI.

Alienvault with NXLog Conclusion


This is the final part of our foray into NXLog and Alienvault.

To recap: you have enabled rsyslog over TCP, forwarded NXLog IIS FTP logs over to Alienvault, created an initial plugin to identify unsuccessful logins, done the CFG and SQL files, and written the events into the ossim-db.

You need to do one more thing:

alienvault-reconfig

The command reconfigures the whole system, to ensure that the plugin is properly used.

Once done, you can do the following:

cat /var/log/alienvault/agent/agent.log | grep 10000

Now finally you might be able to see something like

Apr  4 11:30:18 VirtualUSMAllInOne ossim-agent: Alienvault-Agent[INFO]: Plugin[10000] Total lines [138] TotalEvents:[22]  EPS: [0.00] elapsed [10.01] seconds

This means that out of 138 lines in the log, Alienvault was able to parse 22 into events – which makes sense, since you only set up one rule: Unsuccessful Logins.

The best practice is simply to have a generic 'catch all' rule at the end of the plugin which is extremely general and will catch everything else. With the methods in the previous article, hopefully you will be able to grasp this cryptic skill of plugin creation. Go ahead and try it.

If you head over to the GUI, go to Configuration->Threat Intelligence->Data Source->Data Source Groups.

Now create a new group and call it ‘EVERYTHINGISAWESOME’. Or something you like.

Now, let's add a Data Source. Select the new data source you created, 10000.

Now head to Analysis->SIEM.

Filter Data Source Groups to the group you just created.

AWESOME! You can see UNSUCCESSFUL LOGIN, and finally there is a source and destination IP, and no longer 0.0.0.0.

Go ahead and click on it and feel the warm, fuzzy feeling of success flow over you.

There’s one last thing you will need to do before we wrap this up in a ribbon.

Custom Directives.

You see, in the end, all the stuff you see in the SIEM is useful only if you are eyeballing it. And in general, most people would rather eyeball the latest episode of The Walking Dead than eyeball a SIEM. Heck, some people I know would rather eyeball a tulip growing than watch logs in a SIEM. That's where Alarms come in. Alienvault has a very simple philosophy on alarms: if the Priority of the event, the Asset Value and the Reliability of that event are high enough, it is going to throw out an Alarm. An Alarm is something that you should be somewhat…alarmed about.

So back to our unsuccessful login. In theory, if someone has one unsuccessful login, you would chalk it up to a typo. Two in a row, within 1 second? Well, maybe he's in a hurry. 5 in a row within 5 seconds? He's going to look suspicious, but hey, he could be someone from senior management. Or the marketing director who thinks the DVD drive is a cupholder. 10 in a row within 10 seconds? OK, I'm just going to throw up an alarm and let the security guys know about this fellow.

How does it work?

Without going into too many of the intricacies of the Alienvault engine, it basically comes down to directives. And after wading through the swamp called custom plugins, you will find this a lot easier, as it's mostly done in the GUI.

First of all, obviously, make sure your custom plugin works.

Then, head to Configuration->Threat Intelligence and click on Directives.

You will see that 'User Contributed' is empty. You can either create one from scratch or clone an existing one. I recommend the latter. Alienvault ships with around 2,000 directives built in, and I would just feel bad neglecting the existing ones… I am sure the Alienvault engineers will be happier if you used theirs. They worked hard on these directives.

OK – for this case, I specifically want to know if someone is trying to bruteforce my extremely sensitive FTP server (I know, you are asking why I am using FTP and not SFTP, and my answer is: ILLUSTRATION). So the first thing I notice is that there is a convenient directive group called 'Bruteforce Attack', with over a hundred directives already drawn up for you. Click on it.

Take your pick of any of the directives shown. For me, I chose

AV Bruteforce attack, FTP authentication attack against DST_IP 

Click on the “Clone” icon next to it.

You will find there’s one directive now under ‘User Contributed’.

You need to modify it somewhat.

[Screenshot: the cloned directive and its rules]

A brief understanding will be useful here. This directive is basically saying:

a) If there is ONE occurrence of an unsuccessful login from any IP to any destination that uses this data source, set the reliability to 4. Now, depending on your priority (usually it's 3) and your asset value (usually it's 2), this won't throw an Alarm. The alarm works by

(Reliability X Priority X Asset Value) >= 25

So if you set your asset value to 5, then even one single wrong login will light up your Alarm panel (a quick sketch of this arithmetic follows after this list).

b) So back to the rule – if there are 3 more occurrences of a wrong login from the same source to the same destination, we raise the reliability up a little. But notice the timeout value – it defines how many seconds the correlation engine will wait for those additional matching events. If they do not arrive within the timeout period, the rule expires and the directive processing defined in the rule is discarded.

c) Change the data source to the one that you created – in this case IIS_FTP – and in the popup, select UNSUCCESSFUL LOGIN (in our case, the SID is 1). Everything else we just took from the original directive.

d) Restart the server. This doesn't actually restart the whole Alienvault, just the correlation engine, so don't worry that your SIEM is going to go down.
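
To make the alarm arithmetic concrete, here is a minimal Python sketch of the threshold described in (a) – our own illustration of the formula above, not AlienVault's code. The third example assumes rule (b) has bumped the reliability to 6:

def raises_alarm(reliability, priority, asset_value):
    # Per the formula above, an alarm fires when the product reaches 25
    return reliability * priority * asset_value >= 25

print(raises_alarm(4, 3, 2))  # False - 24: one failed login, ordinary asset
print(raises_alarm(4, 3, 5))  # True - 60: same event, asset value raised to 5
print(raises_alarm(6, 3, 2))  # True - 36: reliability bumped after repeated failures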

The final test is to just try to log in to your FTP server with wrong passwords repeatedly.

Now head over to your Analysis->Alarms

You will see

AV Bruteforce attack, IIS FTP authentication attack against Host-192-168-0-35

Click on the details and scroll down, and you will see the corresponding events and the correlation level of each event. Right at the bottom, you will see that after the first four attempts, an alarm was raised and the correlation level went up. Then it goes through the next set of rules – after another 10 attempts, the alarm and correlation level were raised again.

Using policies (not covered here, or else this article would be the size of War and Peace), we can then decide what we want to do with this alarm – open a ticket, execute a script, send an email, trigger an SMS, etc.

That's about it. We've covered getting NXLog logs all the way to Alienvault, Alienvault interpreting them through a plugin you created, and then using these new custom events to trigger an alarm through a directive you modified.

Welcome to the Alien Nation!

Alienvault with NXLog Part 3


This is our third session on getting Alienvault to play with NXLog. In the first two articles, we explored enabling TCP for rsyslog on Alienvault (thoroughly optional) and forwarding NXLog logs (in this case FTP) to Alienvault. The problem is that AV is having a hard time digesting the log: using the stock syslog plugin, it spews out useless events with the IP as 0.0.0.0. The raw logs are fine.

We're going to take a brief look at plugins. I am not going to dive deep into them – there is plenty of source material out there from Alienvault and from Google – but a brief look here will do.

Plugins are the magic that interprets logs. If you head over to

/etc/ossim/agent/plugins

You will find tons of plugins already written for you out of the box. The problem is, there is bound to be something that isn't supported. Now, to be clear, there are many ways to skin a cat, if you are into cat-skinning.

a) Let AV do the work. This is the best way. It lets you get on with your own job, and unless your job description is actually sitting down writing Alienvault plugins, I would suggest this method. You need to have a maintenance contract with AV, though.

https://www.alienvault.com/knowledge-base/how-to-request-a-new-plugin

b) Use the smart event collector.

https://www.alienvault.com/knowledge-base/smart-event-collection-how-to

Yes, there is actually a plugin creator out of the box! Again, you will need some basic config, and you would think this is a Godsend GUI to solve all your plugin problems. Except – I believe it's still in development, as I fed it logs a couple of times and it gave some functional results, but struggled with the customisation I wanted. It's probably a good starting point; otherwise, I suggest rolling up your sleeves and impressing your co-workers by working on plugins manually (after trying out route a), of course).

To summarise, plugins consist of two files:

a) The CFG file – this is where all the stuff you need to write occurs. The suggestion is to just copy a current cfg file in the plugins directory, make it your own, and change the plugin_id.

b) The SQL file – this is found in /usr/share/doc/ossim-mysql/contrib/plugins. Once the cfg file is ready, this is where we write the events we want to capture into the ossim-db.

Starting with the cfg file for the above example, here is a snapshot:

# Alienvault plugin - CUSTOM
# Author: Professor Xavier
# Plugin iis_ftp id:10000 version: 0.0.1
# Last modification: 2016-04-01 13:00
#
# Plugin Selection Info:
# Microsoft:IIS Internet Information Services FTP:-
#
# END-HEADER

[DEFAULT]
plugin_id=10000

[config]
type=detector
enable=yes
source=log
location=/var/log/alienvault/devices/192.168.0.35/192.168.0.35.log
create_file=false
process=
start=no
stop=no
startup=
shutdown=

[translation]


[001 - FTP IIS - UNSUCCESSFUL LOGIN ]
regexp=(?P<date>\w{3}\s+\d{1,2}\s(\d{4}\s)?\d\d:\d\d:\d\d)\s(?P<device>\S+)\s\S*\s\S*\s\S*\s(?P<src_ip>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})\s(?P<src_port>\S*)\s(?P<username>\S*)\s\S*\s\S*\s\S*\s(?P<dst_ip>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})\s(?P<dst_port>\S*)\s(PASS)\s(?P<password>\S*)\s(530).*
event_type=event
date={normalize_date($date)}
plugin_sid=1
device={$device}
src_ip={$src_ip}
src_port={$src_port}
username={$username}
dst_ip={$dst_ip}
dst_port={$dst_port}
userdata1={$password}

There are a few key components here. First, the header is a must. Second, this is a simplified example in which I did not use any translations yet. These 'translations' are simply a way to interpret events that look the same and map their so-called identifiers to numeric plugin_sids, so you don't end up writing extra regex. It sounds important, but for a demo, let's leave it empty for now – we will see that you don't actually need it to run. It's like a spoiler on a car: it makes your file look good, but the car runs fine without it.

By the way, regex is short for regular expression, and here it's basically in Python format.

The headers are necessary here, and after that come some default values that are self-explanatory. The work starts at the bottom. You generally need to have

a) A Specific rule (that refers to a specific event – in this case UNSUCCESSFUL LOGINS)

b) A catch-all rule (which I did not write here due to time, but see the sketch just below), which is generally where all the other events are caught, usually with the help of translation tables.
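
For illustration, a bare-bones catch-all might look like the sketch below. This is our own placeholder, not taken from a shipping plugin – the rule name, plugin_sid 9999 and the lazy 'grab everything' regex are illustrative, and a proper version would use the translation tables mentioned above. It also needs a matching plugin_sid row in the SQL file.

[9999 - FTP IIS - Generic event]
# Keep this as the LAST rule - it should only fire when no specific rule above has matched
regexp=(?P<date>\w{3}\s+\d{1,2}\s(\d{4}\s)?\d\d:\d\d:\d\d)\s(?P<device>\S+)\s(?P<data>.*)
event_type=event
date={normalize_date($date)}
plugin_sid=9999
device={$device}
userdata1={$data}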

In this example, we have Unsuccessful Login.

The key is to take an actual unsuccessful-login line from the raw log file itself:

Apr  3 16:48:05 S006-SVR01 IIS[1] 2016-04-03 08:46:51 192.168.0.1 5490 - FTPSVC2 S006-SVR01 - 192.168.0.35 21 PASS *** 530 1326 41 25 17 0 6080e2d2-0565-4eb1-9eec-407683eac92a -#015

Then head over to https://regex101.com/ and do some testing.

Make sure you select your 'Flavor' in the left menu as PYTHON, and put the raw log line under 'TEST STRING'. Then start building the regular expression above under 'REGULAR EXPRESSION' (WOW!).

The thing about regex is, it's like programming: there is a way to do it that works, and there is a way to do it that works beautifully. I am the type of pseudo-programmer who would always forget to comment or to do memory management in my university C code. Instead of building functions and classes, I would just write long lines of inefficient code to get the work done, receiving a barely passing mark and a long look of disapproval from my lecturers. In this case, efficient regex is key, because if you are running tons of log lines through AV, the last thing you want your box to do is wade through a pile of inefficient regex to understand them.

Thankfully, this article isn't about efficient regex, so I suggest you really get cracking on it, or get Alienvault regex gurus to work on it. What you see above is a very simple regex, just to demonstrate that AV can actually grab logs, interpret them, and turn them into events in the SIEM. The regex above is rife with \s and \S, which simply say: there is a whitespace, or there is a non-whitespace (character).
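
If you want to sanity-check the regex outside regex101, a few lines of Python will do. This is just the pattern from the cfg above run against the sample log line shown earlier – our own test harness, not part of AlienVault (the agent is Python-based, which is why the Python flavour matches):

import re

# The UNSUCCESSFUL LOGIN regexp from the cfg file, split for readability
PATTERN = re.compile(
    r"(?P<date>\w{3}\s+\d{1,2}\s(\d{4}\s)?\d\d:\d\d:\d\d)\s(?P<device>\S+)"
    r"\s\S*\s\S*\s\S*\s(?P<src_ip>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})"
    r"\s(?P<src_port>\S*)\s(?P<username>\S*)\s\S*\s\S*\s\S*"
    r"\s(?P<dst_ip>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})\s(?P<dst_port>\S*)"
    r"\s(PASS)\s(?P<password>\S*)\s(530).*")

# The raw unsuccessful-login line from the log file
line = ("Apr  3 16:48:05 S006-SVR01 IIS[1] 2016-04-03 08:46:51 192.168.0.1 "
        "5490 - FTPSVC2 S006-SVR01 - 192.168.0.35 21 PASS *** 530 1326 41 25 "
        "17 0 6080e2d2-0565-4eb1-9eec-407683eac92a -#015")

match = PATTERN.match(line)
print(match.groupdict() if match else "no match")
# {'date': 'Apr  3 16:48:05', 'device': 'S006-SVR01', 'src_ip': '192.168.0.1',
#  'src_port': '5490', 'username': '-', 'dst_ip': '192.168.0.35',
#  'dst_port': '21', 'password': '***'}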

However, you will notice that we also assign a few variables – namely the IP addresses, username, device, ports and, of course, the date.

normalize_date is an in-built function that ensures the date in the raw log is normalized to something AV understands.
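
We have not peeked at AlienVault's internals, but conceptually it does something like the sketch below (our own illustration, not AV's code). Note the syslog-style date carries no year, so one has to be assumed:

from datetime import datetime

def normalize_date(raw, year=2016):
    # Collapse the double space in "Apr  3", prepend the assumed year, and
    # parse the syslog-style date into something unambiguous
    parsed = datetime.strptime(str(year) + " " + " ".join(raw.split()),
                               "%Y %b %d %H:%M:%S")
    return parsed.strftime("%Y-%m-%d %H:%M:%S")

print(normalize_date("Apr  3 16:48:05"))  # 2016-04-03 16:48:05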

The plugin_sid is also important, as it assigns the event (within this plugin) a place in the database and allows AV to work on it and identify it. AV also provides 9 userdata slots for any other fields you want to capture. So in this case, we match whatever we see in the logs that has 'PASS' and the error code '530' in it – which tells us someone tried to log in unsuccessfully.

Once the demo cfg is ready, you need to build your SQL File.

For the SQL file, again, you can just take another SQL file from the same folder and rename it appropriately. It's a lot shorter, and it might go something like this:

DELETE FROM plugin WHERE id = "10000";
DELETE FROM plugin_sid where plugin_id = "10000";
INSERT IGNORE INTO plugin (id, type, name, description) VALUES (10000, 1, 'IIS_FTP', 'IIS FTP');
INSERT IGNORE INTO plugin_sid (plugin_id, sid, category_id, class_id, name, priority, reliability) VALUES (10000, 1, NULL, NULL, 'UNSUCCESSFUL LOGIN', 4, 3);

That's pretty much it. First you delete the old plugin entries, then insert a new one with the required name and description, and put in the plugin_sid (event type ID) – in this case "1", which equates to UNSUCCESSFUL LOGIN.

Run the command

cat iis_ftp.sql | ossim-db

to write the SQL to the DB (using the actual SQL file name you have, of course).

And you are now ready to do some testing!

Alienvault with NXLog Part 2


So, you have now figured out how to enable syslog over TCP on Alienvault. As it turns out, it's optional, in the sense that you can get NXLog to run over UDP. But it's good to know.

Now, in general, we don't mess around with systems other than Alienvault. But in reality, over so many POCs and deployments, it's inevitable that we need to get our hands dirty and figure out with the client what on earth is happening and why we don't see those dratted logs coming in. In the previous post, we explored the useful tcpdump and netstat -tulpen commands. But for this one, let's look at the other side, NXLog, and see how we set it up.

First of all, we obviously want to get something going on that Windows box. Let's say, for instance, we set up IIS FTP there and we want to shovel all those logs over to AV. Now, I will assume IIS FTP is already up and logging, or else we are going to be stuck talking about all sorts of stuff that a million other articles have already explored.

We will jump into NXLog. Install it (again, not going to explore that here), and once it's up and running, set up the conf as follows:

define ROOT C:\Program Files (x86)\nxlog
Moduledir %ROOT%\modules
CacheDir %ROOT%\data
Pidfile %ROOT%\data\nxlog.pid
SpoolDir %ROOT%\data
LogFile %ROOT%\data\nxlog.log

#Where your logs are
define IIS_LOGS C:\inetpub\logs\FTPSVC2

<Extension syslog>
    Module xm_syslog
</Extension>

#Define IIS Source
<Input IIS>
    Module im_file
    File "%IIS_LOGS%\u*"
    SavePos TRUE
    #Add in Syslog Fields
    Exec $SourceName = 'IIS';
    Exec $ProcessID = '1';
    Exec $MessageID = '2';
    #Don't send comments over
    Exec if $raw_event =~ /^#/ drop();
    #csv->parse_csv();
</Input>

#Define Output
<Output AV_Syslog>
    Module om_udp
    Host <Host of Alienvault>
    Port 514
    OutputType LineBased
    Exec to_syslog_ietf();
</Output>

#Make something happen
<Route IIS>
    Path IIS => AV_Syslog
</Route>

A few things are happening here, but basically you are getting NXLog to look in the specified folder for files starting with u* and to send them via UDP syslog to Alienvault. Restart NXLog and see if it works.

You should be seeing something on the tcpdump in Alienvault.

For starters, make sure your logfile is getting logs. The easiest way is to fire up an FTP client and do a put or get against your FTP server, and check that you see the logs ON THE SERVER first. If they are there, that's a good start. You should then see some traffic hitting your Alienvault via tcpdump.

Now for Alienvault: I know a lot of documentation out there teaches you to set up rsyslog manually and all that, but the new versions have simplified it. What we did was go to the actual asset in the GUI: Assets and Groups -> Asset IP -> Details -> Plugins

Edit the Plugins and select Syslog

Most of us will go huh? Will this work?

No, it won’t work. Not yet anyway, but once this is done, you get two things:

Back in AV CLI, run

more /etc/ossim/agent/config.yml

The config.yml shows which log file on the AV each particular device is tied to. Very useful, especially if you have hundreds of logs coming in and you get confused. The second thing is the devices directory:

/var/log/alienvault/devices

You should see a folder named after the IP of the server you have NXLog running on, and in that folder a nice log file, which in turn will have the details of the logs that NXLog is sending over.

The problem now is that AV doesn't quite understand this log. Not yet. That's where plugins come into play. We used the syslog plugin only as a placeholder to create the logfile – the syslog plugin itself won't understand it.

Wait till you see a few lines coming into your Alienvault log file from NXLog, then run:

cat /var/log/alienvault/agent/agent.log | grep 4007

What you have done here is check whether the syslog plugin (plugin id 4007) sees anything coming in. If it does, you might see some events like

Apr 2 11:27:25 VirtualUSMAllInOne ossim-agent: Alienvault-Agent[INFO]: Plugin[4007] Reading from /var/log/alienvault/devices/192.168.0.35/192.168.0.35.log
Apr 2 12:17:02 VirtualUSMAllInOne ossim-agent: Alienvault-Agent[INFO]: Plugin[4007] Total lines [134] TotalEvents:[134] EPS: [5.80] elapsed [10.01] seconds
Apr 2 12:17:12 VirtualUSMAllInOne ossim-agent: Alienvault-Agent[INFO]: Plugin[4007] Total lines [134] TotalEvents:[134] EPS: [0.00] elapsed [10.01] seconds

It basically means Alienvault processed that number of lines as 'events'. Which you might think is great, but not so.

Back in the GUI SIEM, you can filter the data source by 'syslog' and you will get a bunch of syslog events, but with the destination and source hosts as 0.0.0.0. Basically, AV sees stuff coming in and uses the syslog plugin to attempt to read the log, but it can't figure out the IPs within that log due to its format, so it just emits this cryptic line. Going into the event itself gives you a clue – the raw log does state it is an FTP log coming from the FTP server, through NXLog. So Alienvault sees the logs; it just can't interpret them properly with the syslog plugin.

So. You are halfway there. The logs are coming in from a Windows box through NXLog, and Alienvault sees them, but it needs to speak the same language to understand them. Enter custom plugins.

Stay tuned!