So, continuing the Real Myths of PCI-DSS, let's move down the list.
Real Myth 5: All PCI-DSS services must be outsourced
Now, this is a very important myth to clear up, because it directly relates to what is usually the biggest concern of all: cost. A while ago, we provided an idea on how to cost PCI-DSS, breaking it up into certification/advisory costs and implementation costs. While the certification/advisory cost is easier to gauge based on locations, processes, card storage and activities covered, implementation cost is harder to estimate. Because, number one, you don't know your scope yet. You may have 10 or you may have 200 systems in scope; you don't know. Some go, "Ah, but we know, because we have already decided our scope!" and we go, "Ah, but that's Real Myth 7, that you can decide your own scope…read on, intrepid adventurer of PCI!"
In any case, one way to cap or save cost is to in-source the work, i.e. have your own people provide the implementation services. There is no such thing as a "PCI-certified" company for implementation services. All services – except for ASV scans – can be performed by your own people, if you are qualified enough to do it (more on that later). Here are some services that are a must for a typical PCI project: penetration testing, internal vulnerability assessment, secure code review and code training, patching, logging and monitoring with daily review of logs, card data scans, application testing, systems hardening, segmentation penetration testing, encryption, key management and so on. These are fairly typical activities you will find in PCI – and you can do them all on your own if you have the resources and knowledge. So don't feel cornered by any firms or consultants stating that these services must be done by them in order to pass PCI-DSS!
Real Myth 6: All service providers MUST be certified to do implementation services
This is an extension of Real Myth 5. Once a company decides to outsource PCI services – in the case where they do not have the resources to do them internally – they go about requiring "PCI qualified" service providers to perform them. We've seen this before, where the requirement was to be a "QIR – Qualified Integrator and Reseller" to do services like penetration testing, code reviews and such. QIR wasn't created for that. QIR was created for implementing merchant payment systems and has nothing to do with the services mentioned. Aside from that, there is a growing call for PCI services to be performed only by penetration testing companies certified under CREST, or individuals with certifications like Certified Ethical Hacker. Now, while these are all well and good, and certainly mentioned even by PCI-DSS as guidance in selecting your vendors, they are by no means a requirement of the standard. Meaning, the QSA cannot insist all your testing be done by the above-mentioned certified entities if you have ready, qualified and experienced personnel on your end to do it. Again – this doesn't mean any Tom, Dick, Harry, Joe or Sally can perform testing or activities in your environment. The above certs and qualifications obviously carry weight, and if an organisation takes the trouble to go through CREST, versus a company that was set up two days ago employing 2 testers working in Elbonia – which you should prefer, or which one the QSA will have less of an issue with, is pretty obvious. What I am stating here is that we've seen many veterans who are far more efficient and experienced in systems and security testing than we can ever hope to be, and for whatever reason, they don't bother much with this paper chase of certifications.
In the end, the QSA may raise a query on who carried out the test and may choose to check the credentials of the testers, but if the testing seems to be in order, most QSAs are OK with it.
Real Myth 7: PCI scope and application of controls can be determined by the customer
This one is my favourite, because it played out like an episode of a slapstick comedy. I was called one day by one of our clients who had a new group handling their PCI-DSS program. You see, we've been running their program for four-plus years and servicing them fine – but the new group handling PCI isn't well versed in it. It's frustrating because no matter how many "knowledge transfer" sessions we gave, we still ended up with the same questions. We realised we were stuck in a Groundhog Day scenario, where things never change no matter what we do. The group wasn't technical, which was an obstacle, but overall I think maybe they just had too many things on their plate.
So on this call, they said they were going to compare our quote to other providers this time around and I said, yeah, it’s fine. They then proceeded to give me a scope to quote and I commented, “Hold on, this is the wrong scope. This is the list of assets two years back. You have now changed your scope, and there is a new list of assets under scope for PCI.”
From there, the proverbial excretion hit the fan. They demanded to know how I knew their scope. I said, well, we helped you work it out. Your operations team is aware of it – every year we help you validate your scope (as per PCI-DSS guidance). And they went: "Why must the scope come from you? We are the owners of the environment and the project, so we decide the scope!"
Aha. This is where our points diverge. You see, while the organisation does have the overall responsibility for setting the scope of PCI, PCI-DSS also has a guidance document, "Guidance-PCI-DSS-Scoping-and-Segmentation", that defines which assets and networks that scope must include – and therefore how and where services should be implemented. So, for illustration:
Company A says, "Well, we have a payment gateway and a payment switch business. We also have a call center and a merchant business that accepts credit cards through kiosks or direct POS acceptance in our outlets. Now, getting our merchant environment certified is going to be a pain. We have decided to just certify our payment switch environment, which is isolated in a cloud and not related at all to our payment gateway, which we are just about to launch a few months from now, so there are no transactions yet."
So there you go. Company A has set their scope, and from the outset it kinda looks fine. Yeah, if these are all isolated environments, it's OK. In any case, in the Report on Compliance, the QSA would detail any services offered by the company that are NOT assessed, making clear which services are NOT PCI compliant for that company.
However, what Company A cannot decide are the services and the assets involved in their scope. There is a method to scoping defined by PCI-DSS, and we have written about it at length in this article here. There are a few ways to minimise the scope through segmentation and so on, but for instance, if you run a flat network and insist on keeping it flat, then everything within that network comes into scope – be it your payment gateway, your merchant business servers, your call center laptops etc. So you can 'define' your scope, but what gets sucked into that scope for hardening, pentesting, patching and all the PCI controls – that is already defined by PCI-DSS. We just have to identify the assets, systems and networks that get pulled in. PCI is like a giant vortex or black hole: everything that sits on the same network or touches the systems in the CDE gets pulled into scope.
So there you have it. We will be exploring the final 3 Real Myths of PCI soon, but for now, if you have any queries on PCI-DSS, or ISMS or Theory of Relativity and Blackholes, drop us a note at pcidss@pkfmalaysia.com. Till then, be safe!
Sometime back, the PCI Council published the Top 10 Myths of PCI-DSS, which we debunked in our series of Myths of the Top 10 Myths here. In this article, we are going to jump into the real, actual myths of PCI-DSS and explain them as we go along. We are not going to touch on the original myths published by the PCI Council; this is very much based on our experience with PCI-DSS for more than a decade here in Malaysia, and what we often hear from companies.
Often this misinformation arises because the client facing PCI-DSS finds it hard to dissect all the information needed for the standard. Unlike standards like ISO27001, PCI-DSS is a journey with different routes to the same destination: PCI compliance. There are 3 separate destinations for PCI – Level 1 Certified with QSA, Level 2 Self-Assessment with QSA/ISA signoff, and Level 2 Self-Assessment with self signoff (no QSA, no ISA signoff). Of course, if you are a merchant, then you also have Level 3 and Level 4, but those are the same as the third route, where you sign off the SAQ on your own without the involvement of a QSA/ISA.
But while the destination itself can be clarified, the whole process of obtaining PCI can be convoluted. Some clients are told by their banks that because they do not store credit card data, they are considered SAQ Level 2. Some are told that because they have a website, they must do ASV scans. Or that QSAs must be involved in everything. Some are even told that local QSAs must be hired, and no others. Some are of the opinion that PCI is a license they need to purchase, or a training they need to attend. And some are of the opinion that an ASV scan will make them PCI compliant.
Hence, with all the above misinformation and more, it's easy for customers to get frustrated with the expectations of PCI. When they hear a Level 1 certification may set them back 15–20K USD or more, or that it would take them 6 months or so, they balk at it. It's funny, because often I start my sales pitch by saying: "At the end of our conversation, my goal is to try to get you to avoid getting services from us, if possible." Because it's essentially true. Our job at the beginning isn't to peddle services, consulting or audits that our clients may not need. Our goal is to provide them with enough information about PCI-DSS so they can make informed decisions. And yes, even if those informed decisions mean they can avoid PCI, or do their own SAQ without any consultation, ASV scans or certification, or get an exemption from their banks/customers, or anything else that can lower their requirements for PCI-DSS. And yes, many people who have called us actually just pay us by saying 'thank you', and we never hear from them again. Because as advisors, it's better to start doing the right thing at the very beginning instead of focusing on selling services that customers do not need. This philosophy has been with us from the start of our company – which is one of the reasons why I failed so miserably in my previous corporate role as regional head of professional services sales. It's also why I was once told off by a potential business partner that I was a poor salesperson and that he preferred to work with an organisation with someone better handling sales. Ah well.
So here are some of the top REAL myths of PCI-DSS that need to be debunked, burned, destroyed and thrown out of the window for the garbage they are.
1) All PCI-DSS Projects Require ASV Scans
2) ASV scans make you PCI compliant
3) All PCI-DSS requires (local) QSA
4) All PCI projects are the same (One Certificate to Rule them All)
5) All PCI-DSS services must be outsourced
6) All service providers MUST be certified to do implementation services
7) PCI scope and application of controls can be determined by the customer
8) PCI-DSS gets easier and cheaper every year
9) A company is considered PCI compliant even after the expiry of certification, due to 90 days grace period from the council
10) If the company is an ISMS certified company, they have already complied to 90% of PCI-DSS
So there is quite a bit of stuff to uncover – some of it half-truths, some of it utter nonsense – and we will likely need to break this article up into two parts. Let's jump into it.
Real Myth 1: All PCI-DSS projects require ASV scans
This myth is often peddled by those who are selling ASV scans as part of their service. Don't get me wrong, we also do ASV scans through our ASV partners, but you can't go around town telling people that all PCI requires ASV scans when it doesn't! Read SAQ A. Read SAQ B. You don't see ASV mentioned anywhere in the SAQ except for this portion in Part 3a:
ASV scans are being completed by the PCI SSC Approved Scanning Vendor (ASV Name)
And under “PCI DSS Self-Assessment Completion Steps”:
Submit the SAQ and Attestation of Compliance (AOC), along with any other requested documentation—such as ASV scan reports—to your acquirer, payment brand or other requester.
The thing is, if you go through each control under those SAQs, the ASV requirement (11.2.2) isn't mentioned, so it's not required. It's highly frustrating to us when, for instance, travel agencies doing pure EDC terminal business (SAQ B), connecting directly via cellular or phone line to the acquirer, come to us asking for a quote for an ASV scan on their website. We tell them: you don't need an ASV scan for your website unless it's in scope. You can force us to sell it to you, but it's against our moral code to sell you stuff you don't need. We take a look and find it's a simple, informational site, and they tell us, "Well, our previous PCI advisor told us to scan our website." No. You don't need to. Don't waste your money, and don't do it unless you have a website in scope, you are doing an SAQ requiring an ASV scan, or you consciously decide to do it as a matter of good security practice – NOT as a mandatory PCI-DSS activity.
So, please, take a look. Even for SAQ A – usually adopted by e-commerce sites that redirect to a payment gateway for card input, where there is almost certainly a website – the myth is that an ASV scan needs to be done. Read SAQ A. Again, no requirement for an ASV scan. You can still do an external scan for security purposes, but strictly for compliance? No. Not needed, unless requested specifically by the acquirer.
And yes, we do have ASV scans as part of our service. But that shouldn’t make us charlatans peddling services to customers when it isn’t mandatory. If the client still wants to pick it up, ok, fine – but don’t say it’s compulsory when it’s not!
Real Myth 2: ASV scans make you PCI compliant
I won't repeat what we have said there, but this is by far a myth that gets peddled a lot. One reason, sadly, is that the propagation of this nonsense seems to be accepted by banks. I hear: "Oh, no problem, the bank says all we need to do is to run an ASV scan on our website." I interject: "Wait sir, you aren't doing that e-commerce business. You are doing a call center with virtual terminal payments…" <Click> <Dial tone due to hang up>
So there you have it: companies and merchants that have no business doing ASV scans, using ASV scans as a means to demonstrate PCI compliance. We get even weirder ones: while trying to obtain an AoC from one of our client's service providers, they passed us their passed ASV scan report. We ask what the heck that is and they go – that's our PCI compliance, so please shut up and stop bothering us. And it's so difficult to explain to them that whoever told them that is wrong, and that they have to go through the actual PCI compliance process, of which their wonderful ASV scan may (or may not) be a part.
Real Myth 3: The Auditor (QSA) must be Local
This is one of the strangest myths ever.
We get calls from customers going, "Is your QSA a Malaysian?" And I go, "No, we work with our partner QSAs, from India, the US or Singapore." And they go, "Well, we want a Malaysian QSA." And I ask, "Why?" – and most of them are not able to articulate why they need the QSA to be local, except that it may be a checkbox in their document or policy.
OK, I can't argue with your policy, if you have nationalist preferences for your auditors for whatever reason. But it's not logical for companies to require that only local QSAs be used. PCI-DSS never stated that. In fact, it's preferable to have a QSA with regional/global experience as opposed to a purely local one. If PCI-DSS required local QSAs to carry out audits, how could QSAs ever claim 'regional experience'? You see the conundrum? You want an experienced QSA company, yet you want a QSA that only works locally. If every enterprise in the world thought that way, QSAs in every country would only ever audit in their own country, and none would have regional or global experience. Immediately you can see the fallacy of this myth. But it still prevails, for whatever reason (we sort of know the reason, actually).
PCI-DSS requires a lot of experience. The last thing we need is a QSA with only a handful of audits under their belt, no operational idea of how to run things or recommend solutions, relying on a checkbox and some cute marketing gimmicks. I've seen plenty of good auditors overseas, a whole lot better than the local ones I come across, and vice versa. "Local QSA requirement?" It could be peddled by local auditors attempting to block off better-equipped, or even cheaper, auditors from overseas, narrowing the options for their clients, who are hemmed in by such a requirement, thinking it's a PCI-DSS requirement. It's not.
If you mean local support – that they can respond faster since they are local – then yes, there is some sense in that. If you mean they are cheaper compared to a guy in the US, then yes, but let that be a commercial decision and not a technical one. Sometimes even overseas (good) QSAs can be cheaper. Local support, I agree, 100%. Nothing is more frustrating than sending a message to someone and waiting 24 hours for a reply because they are in another timezone. Local presence, local support – yes. But they technically don't need to be a QSA. They could be consultants, and there is a very good case for that, which we noted in our article "PCI-DSS – So Why Aren't We QSA?". We consciously made a decision NOT to be a local QSA a few years ago, to avoid possible conflict, to support our clients more easily, and not to be bogged down by auditor responsibilities in PCI. QSAs are a busy and itinerant lot. Aside from handling other audits and writing reports, they also need to be careful of overstepping their independent role by advising and implementing for their clients and then auditing the same controls they devised.
There is really, if you come down to it, no perceivable value in saying having a “local QSA” is better or not. Having local support throughout the PCI-DSS compliance is important – and whoever is supporting should have at least the same or more knowledge than the QSA.
Some QSA companies have a setup that separates the auditor from the consultant, to ensure more independence. We have the same setup – PKF is the consulting arm, and we deal mainly with implementation, testing and assisting our clients to get past PCI. The QSA is, well, the QSA in this case, and they can do their audit without being too involved in the implementation. We know as much as (and sometimes more than) the QSA due to our operational experience, and this puts us in a better, conflict-free position to get our clients certified.
So, no, in this opinion, there is no real value – or PCI requirement – in having a local QSA, because it does not make sense and is counter-intuitive to peg a customer to selecting only local, possibly less experienced auditors. Most QSAs can (and should) be able to do regional or even inter-regional work, because a QSA company is, by its very nature, a regional or global company anyway (QSA companies are qualified to audit by region, not by specific country). Again, while our opinion may be biased because of the strategic decision we made years ago, we made that decision with all these considerations in mind.
Select the best QSA option based on experience, pricing and quality, not because they are local or non-local.
Real Myth 4: All PCI projects are the same (One Certificate to Rule them All)
A customer once said that we didn't add much value and that all we did was forward their emails to the QSA for validation (not true). He said his team had done PCI across other countries and that we were just making it more complicated than necessary since they were already experienced – implying that we had hoodwinked them.
It's very difficult to talk to people in this position because you can see from the outset that they do not support outsourcing advisory and consulting, and have a personal vendetta against the profession. There is no reasoning with them. In this case, we decided to pull out of the deal for advisory and all other implementation work except the ASV scans.
Two years after starting their PCI project on their own, they were still in the wilderness. We ended up supporting them anyway, and perhaps their thinking has somewhat softened now, because we are finally seeing the end of the project, with us (ironically) leading them to it.
And their 'experience' from other PCI compliance projects? Different experiences. Some were basically e-commerce SAQ A or A-EP types; some were their retail arm's SAQ B or B-IP. But what they were doing in Malaysia was outsourcing, call center and BPO work – all of which involve credit card storage, processing and transmission.
Not all PCI-DSS projects are created equal.
Another company employed the 'One Certificate to Rule Them All' philosophy. They were providing a warehouse storage facility to one of our clients, essentially storing physical copies of forms containing credit card information. So this is a service provider, providing storage, that needs to be assessed for physical security.
They immediately told us they were already PCI compliant and would send us the certificate. We insisted on the AoC, but they obliged us with their 'certificate' anyway, emblazoned proudly with their QSA's logo, stating – SAQ C-VT Certified.
Huh? What has SAQ C-VT (merchant SAQ) got to do with the warehouse storage you are offering to my client?
Apparently that SAQ C-VT cert is from one of their parent companies overseas, and has as much relation to our current project as me running for president of the United States. One Certificate most certainly does NOT rule them all. It's a completely different business function; you can't take an SAQ or AoC from another parent/child company – one that sells ice-cream cakes and had its call agent processes certified – and say it applies to your warehouse storage facility half a world away!
OK, we are halfway there, bear with us. Writing out all these myths can really drag an article on, and you can probably read the frustration oozing out of each paragraph. I'll admit, we get extremely frustrated, but we must also remind ourselves – most of them (customers and banks; NOT QSAs, they don't get any free passes for giving misinformation!) do not know better and are just doing what they think is right, or what they have been told by so-called consultants or QSAs. That's why we need to set their paths correctly, so they know the options before them. So, we need to stop getting frustrated and blaming them for bad decisions, and get more involved in educating and providing information so they can make good decisions.
We will continue the next time once we catch our breath and go through the other wonderful misinformation on PCI-DSS we have heard over the years. Till then, drop us a note at pcidss@pkfmalaysia.com on anything to do with this standard or other standards like ISMS/ISO27001 etc.
When we started out with Alienvault years ago, they were just a smallish start-up, and we worked almost directly with the engineers and sales team in Cork. Of course, a lot has changed since AT&T took over, but in the early days there was a lot of knowledge sharing directly between us and them. So much so that if you were to check their partner site, they still list us as the only Malaysian reseller, a leftover from those early listings. What attracted us to the product was that we could lift the hood and see what was underneath. Alienvault (or OSSIM) was a hodgepodge of many working parts glued together and somehow made to work. The agent was a product called OSSEC, an open-source HIDS. The IDS is Suricata/Snort, and if you look closely at the availability tool, you will see a Nagios instance running in the backend. NFSen is used for netflow data display, and PRADS for asset discovery. OpenVAS is the vulnerability scanner, and best of all, they allow you to jailbreak the system, go into the OS itself and do what you need to do. In fact, most of the time we are more comfortable on the command line than in the actual UI.
History aside, the downside of bolting together these different applications and getting them all to play nice is that you have to understand the inner workings of each piece.
For instance, if you send logs via syslog to Alienvault, you should know that the daemon rsyslog (not an Alienvault product) is what receives them. If you use the agent instead, the application receiving the logs is different – it's the OSSEC server. So it depends on how logs come in, and from there you can decide what you wish to do with them.
The challenge is oftentimes to filter and 'massage' the logs when they hit Alienvault. There are a few approaches to this:
The basics are at stage 1, where the client (server, workstation etc.) sends logs (or has logs collected) to Alienvault. The initial filtering should theoretically happen here if possible. Many applications have the capability to control their logging – Windows Server being one of them. Turning on debug logs on Linux, for instance, would cause a fair bit of log traffic across the network. Applications, too, have options for what to log and what not to log. We see firewalls logging all traffic, and proxies logging every single connection that goes through – this causes loads of logs hitting Alienvault.
AV (especially the All-In-Ones) isn't designed to take on heavy loads the way Splunk or enterprise SIEMs like ArcSight are, chewing through 100,000 EPS like Galactus chews through planets. The AV approach has always been: we aren't only a SIEM, we are a unified security management system, so security logs are what we are after. Correlation is what we are after. APTs are what we are after. Their philosophy isn't to overload the system doing generic business intelligence over millions of log lines, but to focus on security and what is happening in your network. That said, it's no pushover either, able to work with 90–120 million events and push through 15,000 EPS on their enterprise line.
The reality, however, is that most clients just turn on logs at item 1 and plow them all over to Alienvault. So it's really up to Alienvault to start filtering these logs. Layer 2 is what we call the outer layer – the front line of defence against this flood of logs. This is where the engines running the log systems (OSSEC, rsyslog etc.) can filter, and then trickle only what is needed to the Alienvault main engine in Layer 3. The AV main engine also has its own form of defence – policies – where we can create 'junk' policies to simply ignore incoming logs and not process them through the resource-intensive risk assessment calculations.
So, we are going to assume that Layer 1 filtering wasn’t done. What we are going to look at is sorting out Layer 2 and we will assume that logs are coming in via OSSEC. We will have another article on Rsyslog filtering because that is a whole different novel to write.
When a log hits OSSEC, it comes in via the default port 1514/udp. Now remember, when logs first enter Alienvault, they don't immediately appear in the SIEM event display. They first need to be logged, before they can be turned into events, before those events can trigger alarms. So the basic rule is to get them logged:
Make sure you are receiving logs first.
This may seem basic, but we have been through enough to know that no matter WHAT the client says, oftentimes their systems are not even sending the logs to us! A simple tcpdump -Xni eth0 "udp port 1514" will show whether the logs are getting in, so start with that to make sure you are receiving. Just append "and host <ip address>" if you need to filter by the sender's IP address.
Another option Alienvault provides, when you are getting logs via HIDS/OSSEC, is enabling "logall" in the USM HIDS configuration, which we covered in previous articles here. But be aware that turning on logall can potentially bring a flood of logs and information into the box, so we generally avoid it unless it's really needed.
Once you see logs coming into Alienvault – for OSSEC, at least – the next step is to move these logs into "alerts.log", from where Alienvault can start putting them into the SIEM display.
For this to happen, you need to understand 3 things, aside from the fact that we are now working at Layer 2 (OSSEC) from the diagram above:
a) Decoders
b) Rules
c) /var/ossec/bin/ossec-logtest
The above are actually OSSEC concepts – not strictly Alienvault ones. This means that if you wanted to decouple OSSEC from Alienvault, you could: just download OSSEC on its own. Or you could download other products like Wazuh, which is another product we carry. Wazuh runs its own flavor of OSSEC but has a different presentation layer (Layer 3 in our diagram above) and integrates with ELK to provide a more enterprise-ready product; the foundation comes from the same OSSEC principles. So when we talk about rules and decoders and using the ossec-logtest script to test your stuff, this is not Alienvault-specific talk. The Alienvault-specific talk comes later, with plugins and such. In the actual ACSE course from Alienvault (at least the one I passed 5 years ago), there is really no mention of decoders and rules – it basically focuses on the core Alienvault items only.
At this point, we need to decide whether to do the filtering at the OSSEC level (2) or the Alienvault level (3). As a rule, the closer the filtering is done to the source, the better. However, in our opinion, filtering via Alienvault plugins is a lot more flexible and intuitive in design compared to OSSEC (and, admittedly, we are biased, being trained far more in Alienvault than in OSSEC). So for this article (which is taking VERY long to get to its point), the task is simply to funnel the logs into /var/ossec/logs/alerts/alerts.log, because that is where OSSEC writes its alerts and where our AV plugins can read from.
The logs in /var/ossec/logs/archives/archives.log (remember, we turned on the logall option in the OSSEC configuration for this illustration) aren't monitored by plugins, because in a production environment you won't have that turned on. So, once you have logs in the alerts.log file, you are good to go – you can then sit down and write plugins for Alienvault to use in the SIEM display.
OK – firstly, decoders. OSSEC ships with a bunch of default decoders (like plugins in Alienvault) able to interpret a whole range of incoming logs. Basically, a decoder is set up with regular expressions to go through a particular log and pull information out into fields like the date, source IP etc. – similar to an AV plugin. For this illustration, though, we are not going to use much of OSSEC's parsing; we simply want to select the right logs and send them over to the alerts.log file.
So ok, let's take the previous article's example of getting MySQL logs into Alienvault. Let's say we have this example query log coming into our Alienvault (archives.log, if we turned it on):
2021 Feb 21 00:46:05 (Host-192-168-1-62) 192.168.1.62->\MySQLLOG/db.log 2021-02-22T09:41:42.271529Z 28 Query SHOW CREATE TABLE db.persons
The above doesn't really offer much, but you can see there is a date and time, a thread ID and the command itself – and a decoder will need to be created to parse the incoming log.
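To get a feel for what a decoder has to pull out, here is a rough Python sketch – purely illustrative, not part of OSSEC – of the fields hiding in the MySQL portion of that line:

```python
import re

# Illustrative only: the kind of fields an OSSEC decoder would extract
# from the MySQL general query log portion of the line.
pattern = re.compile(
    r"(?P<ts>\d{4}-\d{2}-\d{2}T[\d:.]+Z)\s+"   # timestamp
    r"(?P<thread>\d+)\s+"                      # MySQL thread/connection id
    r"(?P<command>\w+)\s*"                     # Query / Connect / Quit
    r"(?P<args>.*)"                            # the rest of the statement
)

m = pattern.search("2021-02-22T09:41:42.271529Z 28 Query SHOW CREATE TABLE db.persons")
print(m.group("command"), m.group("args"))
```

The decoder we build below does essentially the same thing, just in OSSEC's own regex dialect.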
Picking up from where we left off at the Alienvault link, Task 4 covers the steps to create the decoder:
a) Edit /var/ossec/alienvault/decoders/local_decoder.xml and add in the following:
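The exact entries will differ by setup, but a minimal sketch of such a decoder file, using OSSEC's standard <decoder>/<prematch> syntax (the patterns here are our own illustration – verify them against your logs with ossec-logtest before relying on them), might look like:

```xml
<!-- Illustrative decoders: catch the "Query", "Connect" and "Quit"
     commands in the MySQL general query log lines -->
<decoder name="mysql-query">
  <prematch>\d\d:\d\d:\d\d.\S+ \d+ Query</prematch>
</decoder>

<decoder name="mysql-connect">
  <prematch>\d\d:\d\d:\d\d.\S+ \d+ Connect</prematch>
</decoder>

<decoder name="mysql-quit">
  <prematch>\d\d:\d\d:\d\d.\S+ \d+ Quit</prematch>
</decoder>
```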
The above is a simplistic decoder to catch the 3 important events from the incoming MySQL logs – the Query log, i.e.
2021-02-22T09:41:42.271529Z 28 Query SHOW CREATE TABLE db.persons
Connect Log
2021-02-20T16:35:28.019734Z 8 Connect root@localhost on using SSL/TLS
Quit
2021-02-20T18:29:35.626687Z 13 Quit
Now of course, for those aware, the Query log has many different types of queries – Query Use, Query Show, Query Select, Query Set, Query Insert, Query Update and so on. The idea of the decoder is simply to catch all of them – we will, in theory, log every Query into Alienvault.
Now, remember to tell Alienvault that you have a new decoder file.
In the USM Appliance web UI, go to Environment > Detection > HIDS > Config > Configuration.
Add <decoder>alienvault/decoders/local_decoder.xml</decoder> after the existing <decoder> entry:
Adding this setting enables the usage of a custom decoder. Save it and restart HIDS.
So that’s it for the decoder.
Now, on the CLI, go to /var/ossec/bin and run ./ossec-logtest
Paste the following: "2021-02-20T18:29:43.189931Z 15 Query SET NAMES utf8mb4"
And you should get the result as below:
linux:/var/ossec/bin# ./ossec-logtest
2021/03/29 09:50:10 ossec-testrule: INFO: Reading decoder file alienvault/decoders/decoder.xml.
2021/03/29 09:50:10 ossec-testrule: INFO: Reading decoder file alienvault/decoders/local_decoder.xml.
2021/03/29 09:50:10 ossec-testrule: INFO: Started (pid: 25070).
ossec-testrule: Type one log per line.

2021-02-20T18:29:43.189931Z 15 Query SET NAMES utf8mb4

**Phase 1: Completed pre-decoding.
       full event: '2021-02-20T18:29:43.189931Z 15 Query SET NAMES utf8mb4'
       hostname: 'linux'
       program_name: '(null)'
       log: '2021-02-20T18:29:43.189931Z 15 Query SET NAMES utf8mb4'

**Phase 2: Completed decoding.
       decoder: 'mysql-query'
So basically, any log that comes into archives.log matching that "Query" pattern will be lumped in as decoded by mysql-query. Of course you can further refine it with regular expressions to grab the exact term you wish, but for this illustration we just want to catch the queries, so it's fine for now.
The next item is the rules. Again, referring to the Alienvault writeup above, go ahead and edit /var/ossec/alienvault/rules/local_rules.xml.
What we will do is add the following:
<group name="mysql-connect">
<rule id="192000" level="0">
<decoded_as>mysql-connect</decoded_as>
<description>Connect log is enabled</description>
</rule>
<rule id="192001" level="1">
<if_sid>192000</if_sid>
<regex>Connect\s*</regex>
<description>Connection is found</description>
</rule>
</group>
<group name="mysql-query">
<rule id="195000" level="0">
<decoded_as>mysql-query</decoded_as>
<description>Mysql Query log is enabled!</description>
</rule>
<rule id="195001" level="0">
<if_sid>195000</if_sid>
<match>SET</match>
<description>Query set is found and ignored!</description>
</rule>
<rule id="195002" level="1">
<if_sid>195000</if_sid>
<regex>Query\s*</regex>
<description>Query is found</description>
</rule>
</group>
<group name="mysql-quit">
<rule id="194000" level="0">
<decoded_as>mysql-quit</decoded_as>
<description>Quit log is enabled</description>
</rule>
<rule id="194001" level="1">
<if_sid>194000</if_sid>
<regex>Quit\s*</regex>
<description>Quit command is found</description>
</rule>
</group>
So what the above does is decide what to do with the 3 types of MySQL logs you are getting: Connect, Query and Quit. We want to dump these logs into alerts.log so that we can work on them with Alienvault's plugin. We don't want to do any fancy stuff here, so it's pretty straightforward.
Each of these 3 has a foundation rule:
a) Connect – 192000
b) Quit – 194000
c) Query – 195000
Each foundation rule has a nested rule that decides what to do with the log. Notice you can do either Regex or Match in the rules, which provides a lot of flexibility in filtering. In fact, if it wasn't for Alienvault's plugins, OSSEC's filtering would probably be sufficient for most of your custom log requirements.
For this illustration, our job is simple – for each of these rules, find the keyword in the log, then escalate it to an alert. An alert is created when you create a rule ID with level = 1, i.e. <rule id="195002" level="1">
If you run ossec-logtest again, and paste the log there, you would be able to see
**Phase 1: Completed pre-decoding.
       full event: '2021 Feb 21 00:46:46 (Host-192-168-1-62) 192.168.1.62->\MySQLLOG/db.log 2021-02-22T09:42:21.711131Z 28 Quit'
       hostname: '(Host-192-168-1-62)'
       program_name: '(null)'
       log: '192.168.1.62->\MySQLLOG/db.log 2021-02-22T09:42:21.711131Z 28 Quit'

**Phase 2: Completed decoding.
       decoder: 'mysql-quit'

**Phase 3: Completed filtering (rules).
       Rule id: '194001'
       Level: '1'
       Description: 'Quit command is found'
**Alert to be generated.
Once you see “alert to be generated” you will find that same alert in the /var/ossec/logs/alerts/alerts.log
From there, you can go about doing the plugins and getting it into the SIEM.
Whew. That’s it.
You would notice, however, that there is another sub-rule in there for Query:
<rule id="195001" level="0">
<if_sid>195000</if_sid>
<match>SET</match>
<description>Query set is found and ignored!</description>
</rule>
This sits above the "alert" rule, and you'll notice it is level=0. This means whatever Query is decoded runs through this rule first, and basically, if there is a Query "SET", we ignore it – i.e. it's not a log we want, and it will not be put into alerts.log. Level 0 means: do not alert.
We are ignoring Query Set because in this case we were finding millions of them – Query Set is invoked very frequently and it's mostly noise. We are interested in the Query Selects, Inserts, Updates and so on.
Once you have this rule in, it will filter out all Query Sets. This is basically the only filtering we are doing, so we don't have millions of Query Sets jamming up the alerts.log file in Alienvault.
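Conceptually, the way these nested Query rules behave can be sketched in Python – purely illustrative, since OSSEC does all of this internally:

```python
import re

# Purely illustrative: how OSSEC walks the nested Query rules above.
# Rules are evaluated in file order and the first match wins. The level-0
# "SET" rule (195001) sits before the level-1 catch-all (195002), so
# Query SET lines are silently dropped and everything else alerts.
def evaluate_query_rules(decoded_log):
    if "SET" in decoded_log:                 # rule 195001, level 0
        return ("195001", "ignored")
    if re.search(r"Query\s*", decoded_log):  # rule 195002, level 1
        return ("195002", "alert")
    return ("195000", "no alert")            # foundation rule only

print(evaluate_query_rules("19 Query SET NAMES utf8mb4"))
print(evaluate_query_rules("28 Query SHOW CREATE TABLE db.persons"))
```

The ordering matters: if the catch-all level-1 rule came first, the SET filter would never fire.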
alienvault:/var/ossec/logs/archives# ossec-logtest
2021/03/14 12:36:33 ossec-testrule: INFO: Reading decoder file alienvault/decoders/decoder.xml.
2021/03/14 12:36:33 ossec-testrule: INFO: Reading decoder file alienvault/decoders/local_decoder.xml.
2021/03/14 12:36:33 ossec-testrule: INFO: Started (pid: 12550).
ossec-testrule: Type one log per line.

192.168.1.62->\MySQLLOG/db.log 2021-03-14T16:22:58.573134Z 19 Query SET NAMES utf8mb4'

**Phase 1: Completed pre-decoding.
       full event: '192.168.1.62->\MySQLLOG/db.log 2021-03-14T16:22:58.573134Z 19 Query SET NAMES utf8mb4''
       hostname: 'alienvault'
       program_name: '(null)'
       log: '192.168.1.62->\MySQLLOG/db.log 2021-03-14T16:22:58.573134Z 19 Query SET NAMES utf8mb4''

**Phase 2: Completed decoding.
       decoder: 'mysql-query'

**Phase 3: Completed filtering (rules).
       Rule id: '195001'
       Level: '0'
       Description: 'Query set is found and ignored!'
So you see, from the above, all Query Sets are ignored. You can do whatever you wish using either Regex or Match to ignore certain log messages from within OSSEC itself. It's very powerful and flexible, and with enough time and effort you can really filter down to only the logs you need going into Alienvault – which is really part of the fine-tuning process for a SIEM.
So there you have it. What you have done is take the logs from archives.log and make sure only the logs you want end up in alerts.log (Quit, Connect, and all Queries except Query Set).
The next thing you need to do is to go down to Alienvault (layer 3) and do the heavy lifting in writing plugins and get these events into the SIEM display.
For more information for Alienvault and how it can help your compliance, send us an email at alienvault@pkfmalaysia.com and we will get back to you ASAP!
So from our previous article, we have gotten Alienvault (or OSSIM) running in Virtualbox, and it is able to communicate with the host (your laptop). Again, the reason we have this is to have a small mini lab you can spin up without worrying about VPN connectivity back to the office or wherever, and do some very basic troubleshooting or learning. It's highly useful for us, especially as we deal a lot with custom application plugins where we need to either filter or interpret logs using Alienvault.
So the objective here is to first get MySQL installed and running, and then have MySQL start logging. Now, for the sake of standardisation, we are going to install MySQL Community Edition. Instead of detailing all the complexity of Windows installation, we will keep it brief: Download, click, wait, done.
The more detailed link is as below, but in all honesty, there is nothing overly difficult about clicking on the Windows installation file. Locating where you downloaded it is probably more difficult than the actual installation itself.
Once it's installed (we just installed it as a service), verify that it's up and running by going to the Windows services and checking for MySQL80. Or you could just run netstat -an and find the following:
TCP 0.0.0.0:3306 0.0.0.0:0 LISTENING
Next, for the sake of laziness, you probably want to add the MySQL bin path to your Windows environment variable Path: C:\Program Files\MySQL\MySQL Shell 8.0\bin\. Just go to System Properties -> Advanced -> Environment Variables.
Once that is done, you can run the command
mysql -u username -p
You should be prompted with a password and you can enter that and you should be in.
Now, we are not going to go through the MySQL CLI commands, as that isn't the point of the article. The point here is to create some logs from MySQL and fetch those logs over to Alienvault. There are many ways to do it, and for Windows the easiest would be to just dump it into the Event Viewer and let HIDS go and fetch it. But we don't like to do things the easy way, because we are technology masochists. So the idea here is to log MySQL to a flat file, get HIDS to grab it, and get Alienvault to interpret it.
We are big fans of the above plugin, as most of our clients tend to end up with MySQL Community Edition, which means plugins like the official MySQL Enterprise Audit plugin are not available for cheapskates like us. There is the Percona audit plugin as well, which we have not tried, but it seems very much focused on Percona. There is also the McAfee plugin, which we tried, but after a bit of tinkering we decided we were probably too busy to make it work. So we were left with the MariaDB plugin, which we got to work for our client.
It's still a good read, but it is a few years old now. And we will definitely relook into it in the near future.
This time around, we are going to get the Windows version of MySQL to write the general query log into a flat file instead, and have HIDS pick it up. This gives us a good idea of how HIDS/Alienvault can be configured to pick up any flat file, which gives you pretty much God-like powers in terms of flexibility in getting logs into your Alienvault. If we can grab any flat file and create events from it, the possibilities for integrating with custom applications are endless.
To start you need to be aware of two things:
a) There is already a native logging capability in MySQL CE to log to a flat file, which we will be using for illustrative purposes: the all-powerful "general query" log. We say illustrative because this probably isn't a long-term solution – it's akin to turning on debug on your app. There are a LOT of logs, because every query is logged. Useful for troubleshooting; not so cool if you have a busy server, because the file grows pretty quickly.
b) Windows doesn't have a native way to send logs, except by way of WEF (Windows Event Forwarding), which basically just sends logs to a collector (another Windows system). It seems like an awfully clunky way to do log centralisation, so it's probably better (still, in 2021!) to use either a forwarder like NXLOG or to install OSSEC (HIDS) as an agent to talk to Alienvault.
So for this article, we will start by enabling general query log on your Windows MySQL instance.
mysql> set global general_log_file='C:\MySQLLOG\db.log';
Query OK, 0 rows affected (2.39 sec)

mysql> set global log_output = 'file';
Query OK, 0 rows affected (0.00 sec)

mysql> set global general_log = on;
Query OK, 0 rows affected (0.00 sec)

mysql> show variables like '%general%';
+------------------+--------------------+
| Variable_name    | Value              |
+------------------+--------------------+
| general_log      | ON                 |
| general_log_file | C:/MySQLLOG/db.log |
+------------------+--------------------+
2 rows in set (0.01 sec)
The above series of commands basically tells MySQL to turn on the general log, sets where the log file is located, and instructs the output to be a file. You can verify it is set with the last command.
To make this persistent, you can go ahead and edit C:\ProgramData\MySQL\MySQL Server 8.0\my.ini
And include the following under the header General and Slow Logging
#General and Slow logging.
log-output=FILE
general-log=1
general_log_file="C:/MySQLLOG/db.log"
Restart your service and you should be able to see if there is a db.log file (You can name it anything).
Try to do a login with your mysql console and you should be able to see a “Connect” log and a few query logs in the flat file.
Now, the next thing is to find a way to forward these logs over to Alienvault. You could use NXLOG (we covered that in detail in a series here). However, that topic has been beaten to death over that series, so we won't look at that option for now.
So for the sake of brevity, the Part 1 link should be very straightforward to follow. Install OSSEC HIDS onto your laptop (which is also your MySQL server), but of course change the OSSEC config to reflect the proper path and filename of the MySQL flat file log you just created here.
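As a rough sketch of that config change (the log_format choice and path here are our assumptions for this lab – adjust to wherever your db.log actually lives), the agent's ossec.conf would gain a localfile entry along these lines:

```xml
<!-- Illustrative ossec.conf entry on the agent: point OSSEC at the
     MySQL general query log we created earlier -->
<localfile>
  <log_format>syslog</log_format>
  <location>C:\MySQLLOG\db.log</location>
</localfile>
```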
So the conclusion of it is this:
a) You have a running MySQL server, and you are able to query it and log from it. I would suggest at this point installing a MySQL GUI like HeidiSQL or phpMyAdmin to interact with your database. If you are fine with Workbench or the CLI, then go ahead. For me, I like HeidiSQL because it's clean, and very useful for simple testing and querying.
b) Your MySQL is logging every single entry into a flat file of your choosing. Again, since this is a test, that's fine. For a live server, make sure you have a cleanup script in place so your general query log doesn't grow out of control.
c) You have HIDS (OSSEC) – we use the two terms interchangeably, so you know – installed on your laptop/server, configured to pick up the logs from the MySQL flat file you configured.
d) On the OSSIM (Alienvault) – again, we use both interchangeably – running in your Virtualbox, you have enabled HIDS to logall, so the raw log is now dumped into archives.log (we recommend removing the logall directive once you finish configuring, since this file will also grow quickly in a live environment).
At this point, if you were to open up the /var/ossec/logs/archives/archives.log of your Alienvault, you would observe that the format of the log coming in is
2021 Feb 22 09:41:42 (Host-192-168-1-111) 192.168.1.111->\MySQLLOG/db.log 2021-02-22T09:41:42.271529Z 28 Query SHOW CREATE TABLE dbtest.persons
Compared to the log from the database file itself
2021-02-22T09:41:42.271529Z 28 Query SHOW CREATE TABLE dbtest.persons
So it is, word for word, the actual log itself. That's great, but aside from centralisation it's not actually doing anything (sort of like Batman in Justice League). It's just there, with no purpose.
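You can see the wrapping for yourself – a quick Python sketch (illustrative only) that strips OSSEC's archives.log prefix back off to recover the original line:

```python
# Illustrative: an archives.log entry is just OSSEC's own header
# (timestamp, agent name, monitored file) glued in front of the raw MySQL line.
archived = ("2021 Feb 22 09:41:42 (Host-192-168-1-111) 192.168.1.111->"
            "\\MySQLLOG/db.log 2021-02-22T09:41:42.271529Z 28 Query "
            "SHOW CREATE TABLE dbtest.persons")

# Everything after the monitored file's name is the original log, verbatim.
header, sep, original = archived.partition("db.log ")
print(original)
```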
In our next article (which we will call Flat file log capture part 2), we will break down the structure in which logs flow into Alienvault and how ultimately we can get these logs to eventually be seen in the SIEM event.
Ok, it's been a while since we last wrote anything close to being technical, so let's quickly get to it.
Many times we would like to get some quick testing done on an Alienvault box, but we don't have one at hand. Some of these tests are, for example, validating a plugin you just wrote, or checking a config you want to implement for a client but would rather not test there. There are many scenarios where you'd just want to fire up a simple box and do some testing. One way is to set up an Alienvault in your office with a couple of servers to run as test systems. A simple VPN in, and voila, you are done. But what if you wanted to simulate logs but don't have the necessary systems to do so, or would rather not change any production systems you have at hand?
One way is to install Alienvault on Virtualbox on your laptop, and either simulate logs from other VMs to it, or just get your host laptop to send logs to the guest Alienvault. Virtualbox is probably the easiest way to get it done. You don't need to set it up as an extremely powerful system, since you would mostly be doing testing on it. For me, simply using it for plugin verification, decoder and rule setup, and simple log testing was enough. I set the VM up with 2 processors, 8GB RAM and 30GB storage (fixed), and downloaded the OSSIM image to set it up.
To further simplify, we followed this excellent tutorial here. Kudos to the writer for the details. The idea was twofold: to get our host talking to OSSIM, and for OSSIM to be able to reach the internet. The trickiest part of this relatively simple setup is getting the networking sorted.
In theory, OSSIM allows two interfaces to be set up – one as the management interface and the other for log collection. The slight difference here is that I've set up eth0 as a 'bridged adapter' and selected my laptop's wireless adapter, which in theory should allow internet access as required. The second adapter generally wouldn't be needed – it's used if you are accessing the box from an internal network (say you set up Virtualbox on a separate machine and access it internally). But because I am building it all in one (everything on a single laptop), I don't need that second interface, as I can just access the management interface through the logging interface. So go ahead and set up eth0 as bridged, and later on assign it an IP on the same network as your laptop.
One strange thing you may experience from time to time is not being able to SSH into your OSSIM for some reason. It could be an IP conflict, or your ARP cache may need updating, especially if you have other systems with the same IP. Try pinging from the server to the host, and from the host to the server – that may resolve the issue.
Now that you are done, go ahead and access your OSSIM. In our next article, we will start a very simple tutorial: setting up a MySQL database (also within the same laptop), writing its logs to a file, and getting Alienvault to pick the log file up via HIDS.