
PCI-DSS and the SAQs that suck


While a good part of our PCI business is providing Level 1 certification to service providers, we have also provided the same to Level 1 merchants. Where we are seeing a big need for advisory, though, is with Level 2 service providers and Level 2, 3 and 4 merchants, because they generally fall under the SAQ category. SAQ = Self Assessment Questionnaire.

Now, I am not going to document what these SAQs are, or their individual applicability and requirements – there are 2,345,565 sites so far that do this, so go ahead and google it – those sites do a great job of presenting the SAQs in a far more structured manner.

What we are going to attempt here is to jump right into it, on the assumption that you have some familiarity with these SAQs, that you are as frustrated with them as most of our clients, and that you want to find out why these SAQs suck so bad.

Well, mainly, there are a lot of ’em. PCI-DSS isn’t a guideline or framework like ISMS/COBIT. It’s a bunch of standards (some excellent in terms of making sense, some not so) that range from ‘oh, that’s easy’ to ‘that is going to cost me a bomb’. So different SAQs apply to different businesses, and each type of business has a somewhat different journey in PCI – the online mall with e-commerce vs the restaurant chain with 100 branches nationwide, etc.

We are going to focus on where most of the confusion happens: SAQ A vs SAQ A-EP. Note that these SAQs apply mainly to e-commerce customers. So if you are doing mainly e-commerce business (we can go into POS issues later), then it’s either SAQ A, SAQ A-EP or SAQ D-MER.

Now there’s a bit of history here – previously, e-commerce companies doing online transactions with credit cards had two choices: SAQ A, which has a breezy 14 or so questions (now updated to around 20), or SAQ D, which jumps to the full monty, i.e. 300+ questions covering the full 12 requirements of PCI. There was no middle ground. It’s like doing a weekend hike up your neighbourhood hill with your 5-year-old son and suddenly being told you are climbing Mount Everest next. You can imagine merchants doing one of two things: tearing their hair out doing SAQ D, or just working on SAQ A, whether they qualify or not. More on the qualifying later.

So now, in the newer versions, PCI says, “Aha, let’s give these guys a break by introducing SAQ A-EP”. The ‘EP’ here stands for E-commerce Payment, we assume. The problem is that the PCI Council, while trying its best to clarify who can or cannot use SAQ A, A-EP or D, only serves to make things even more confusing.

Your goal – if you are an e-commerce merchant – is to do your best to end up with SAQ A, because it is the easiest. More importantly, it’s the cheapest. You don’t need to do any ASV scans, or pentests, or all the kebabs that come with doing SAQ A-EP or SAQ D. The list of questions in the recent version increases from around 20 to around 190 to 340+ as you go from A -> A-EP -> D-MER. That’s the difference between a day’s work, one to two months, and a full five to six months.

PCI generally has a lot of documentation on SAQ A and A-EP and when to use them. We have provided a few good links below the article.

PCI generally slices e-commerce implementations into 4 broad categories, described in layman’s terms below (for a more technical explanation, google the terms below with ‘PCI’ appended and some good sites should come up that explain them in more detail):

a) Redirect – SAQ A

When you (the customer) click on checkout with credit card after selecting your favourite golf clubs to buy (or high heels, whichever your fancy), you suddenly get a message saying “exiting www.ecommercemerchant.com, redirecting to PaymentProcessorName”. This is usually a popup, or if not, another tab, or just a pure redirect. You then see another page stating that it’s the payment processor, and here is where you enter your card details (name, PAN, CVV etc). After entering them and being authorised, you are dropped back into the merchant page. The merchant has no idea of anything you typed into the payment page.
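
To make this concrete, here is a minimal sketch of what the redirect looks like from the merchant’s server side. The URLs and parameters are made up for illustration – your processor dictates the real ones – but the point to notice is that no card field ever touches the merchant’s code:

```typescript
import express from "express";

const app = express();

// Hypothetical processor-hosted payment page and merchant details.
const PROCESSOR_URL = "https://pay.paymentprocessorname.com/checkout";
const MERCHANT_ID = "MERCHANT-123";

app.post("/checkout", (_req, res) => {
  // Build the hosted payment page URL with order details only --
  // no PAN, CVV or cardholder name is ever collected here.
  const params = new URLSearchParams({
    merchantId: MERCHANT_ID,
    orderId: "ORD-0001",
    amount: "199.00",
    returnUrl: "https://www.ecommercemerchant.com/thank-you",
  });

  // The browser leaves the merchant site entirely.
  res.redirect(`${PROCESSOR_URL}?${params.toString()}`);
});

app.listen(3000);
```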

b) IFRAME – SAQ A

Not as common, at least in our experience. This is when we click checkout with credit card, the page stays with the e-commerce merchant, but an iframe is loaded. An iframe is basically a page within a page – a child page that belongs to another site. It’s like the dream-within-a-dream concept from Inception. So the merchant page now loads the payment processor page WITHIN its own page. The entire code for the iframe is controlled by the payment processor; even the code to call the iframe is given by the processor. As far as the merchant is concerned, it’s like a redirect – a sleight of hand, it’s prestidigitation! (In the words of Oz.) This is advantageous from a customer experience perspective, as the customer feels they are still with the merchant instead of being sent to another shop to make payment. The problem is, like everything, an iframe is hackable. Here is a good recent rundown of an iframe hacking incident.
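
A rough sketch of the merchant-side embed code, again with made-up URLs and tokens – in a real integration, the processor supplies this snippet:

```typescript
// Merchant-side script: mounts the processor-hosted payment frame.
// The URL and session token are hypothetical; in practice the
// processor supplies both the embed code and the token.
function mountPaymentFrame(container: HTMLElement, sessionToken: string): void {
  const frame = document.createElement("iframe");
  // The frame's content is served entirely from the processor's
  // domain, so the merchant page cannot read the card fields typed
  // inside it (same-origin policy).
  frame.src =
    "https://pay.paymentprocessorname.com/embed?session=" +
    encodeURIComponent(sessionToken);
  frame.width = "400";
  frame.height = "300";
  container.appendChild(frame);
}

mountPaymentFrame(document.getElementById("payment")!, "sess_abc123");
```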

c) Direct Post – SAQ A-EP

OK – this is the one we see most (aside from the first). A lot of customers think they are doing a), when in fact they are doing c). Basically, the form where we type in our payment information sits with the merchant, and once we click submit, it connects to the payment gateway and sends all the information over. The payment detail collection page sits with the merchant – see the sketch below.
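
Here is a minimal sketch of why this lands you in SAQ A-EP territory. The form below is hypothetical, but notice that it lives in the merchant’s page, so the merchant’s code (or anything malicious injected into it) can read the card fields before they are posted to the processor:

```typescript
// Merchant-side: the payment form is rendered by the merchant's own
// page, but its action posts the card fields straight to the
// processor. URLs and field names are hypothetical.
function buildDirectPostForm(): HTMLFormElement {
  const form = document.createElement("form");
  form.method = "POST";
  form.action = "https://gateway.paymentprocessorname.com/direct-post";

  // Because this form (and any script running on this page) can read
  // the PAN and CVV before submission, the merchant page is in scope.
  for (const name of ["cardholderName", "pan", "expiry", "cvv"]) {
    const input = document.createElement("input");
    input.name = name;
    input.type = "text";
    form.appendChild(input);
  }
  return form;
}

document.body.appendChild(buildDirectPostForm());
```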

d) Javascript – SAQ A-EP

We hardly see this around, but even if we did, unless we fired up our developer tools, we probably wouldn’t know. Basically, when we load the payment page on the merchant website, the processor talks directly to us, the customer. The processor sends Javascript to our browser, and our browser magically creates the payment form, which we happily fill in and send back to the processor. Generating the form via Javascript has its advantages – it can dynamically fill in parts of the form depending on where the client is, or generally improve the user experience. But then again, malicious code can be executed instead, and instead of sending your details to the payment processor, you might end up sending them to your friendly neighbourhood hacker.
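
A sketch of what this usually looks like on the merchant page. The script URL and the ProcessorCheckout API below are hypothetical stand-ins for whatever your processor actually provides:

```typescript
// Merchant page loads a processor script (hypothetical URL) which
// builds the payment form in the browser and sends card data straight
// to the processor. The merchant server never sees the PAN -- but the
// merchant page serves the script tag, hence SAQ A-EP.
const script = document.createElement("script");
script.src = "https://js.paymentprocessorname.com/v1/checkout.js";
script.onload = () => {
  // Hypothetical API exposed by the processor's script: it renders
  // the form into #payment, posts the fields to the processor, and
  // hands the merchant a one-time token to charge against.
  (window as any).ProcessorCheckout.render({
    container: "#payment",
    publicKey: "pk_test_hypothetical",
    onToken: (token: string) => {
      // The merchant backend charges using the token, never the PAN.
      fetch("/charge", { method: "POST", body: JSON.stringify({ token }) });
    },
  });
};
document.head.appendChild(script);
```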

All other scenarios fall into the bottom catch-all of SAQ D.

The confusion is compounded when the SAQ A-EP document states: “Your e-commerce website does not receive cardholder data but controls how consumers, or their cardholder data, are redirected to a PCI DSS validated third-party payment processor”

So my question is: wouldn’t the fact that the merchant site controls the iframe code or the actual redirection make it fall under the “controls how consumers are redirected” caveat there? It does. So much so that the PCI Council issued a statement at their FAQ site: https://www.pcisecuritystandards.org/faqs. Just type in “1292” and you should see the article reproduced.

Basically, they go a long roundabout way of saying that yes, they understand that iframes and redirects fall under that SAQ A-EP caveat, but they are willing to allow SAQ A to be used in these circumstances because “in the payment brands’ experience these are detected before significant volumes of cardholder data are lost. The Council is working with Payment Service Providers to encourage tamper-resistance and tamper-detection which will also reduce the viability of a MITM-type attack.”

As we can see from the iframe hack, it’s not really that trivial to pull off, as you do require some knowledge of the transaction ID and need to get it from the payment processor. Like all man-in-the-middle attacks, it takes some skill, and mass harvesting of credit card details is harder as each transaction ID is unique. It’s a lot easier to get malware into a POS device and siphon the credit card information there, à la the Target breach a few years ago.

So you see, it really depends on how you code or implement your e-commerce site. We have seen many companies underscope themselves by doing SAQ A when they actually had to do SAQ A-EP. Worse, we have also seen some e-commerce merchants forced to go through SAQ A-EP or even D when they could qualify for A. These are usually directed by their acquirers – banks or gateways with little knowledge of the SAQs who somehow just decide that all merchants must suffer through SAQ D. It’s like being jailed for 50 years for stealing an acorn from a squirrel. Maybe in Norway that’s a law, but it sure as heck isn’t here!

And now we are seeing some really strange permutations coming from acquirers – some of our clients are told to do SAQ A, but must do ASV scans. What? If it’s SAQ A, it’s SAQ A. Done. Why are ASV scans needed? Oh – because the acquirer says so. Well, the PCI Council doesn’t say that, so what we are doing isn’t a PCI requirement. And in one case, the company said yes, the merchants need to do ASV scans – but because their risk management approves it, they only need to do them twice a year, not four times. Wait – PCI still requires them at least quarterly, though.

And the best one we have heard in an RFQ so far – the requirement is to “ensure that company must get Level 1 certified and become member of PCI Council.”

Become a member of the PCI Council? How?

I don’t really blame the companies for misinterpreting, actually. I mean, if you look at the number of documents PCI forces us to go through, it’s like asking people to read War and Peace six times. In Russian. When you are not Russian.

So, it’s really our job, whether QSA, ASV, PCI-P or consultant, to generally stop, take a breath and try to get this PCI education going.

This is a long post. I haven’t even gotten to a live demo of actual sites doing the 4 things listed above (Redirect, IFRAME, Direct Post, Javascript). I usually do that during our PCI-DSS training, but I will try to give some examples in the next few articles.

If you are interested in PCI-DSS training (HRDF claimable), a free PCI scoping or any PCI services like certification, ASV scans, penetration testing or generally dissecting the PCI-DSS novels you don’t want to read yourself – drop us an email at pcidss@pkfmalaysia.com and I guarantee we’ll pick it up.

No customer is too small (or big) for us to handle!

Here are some useful links on this topic:

a) Good PDF from VISA

b) Official document from PCI

c) PCIPortal

Good luck on your PCI journey!

PCI DSS and the Problem of Scoping

pci-compliance

I recall an actual case a few years back when I received a call from a company requesting us to do a PCI certification for them. So I met them and drew out their PCI plan, starting with a gap assessment, then remediation, then the certification audit.

They said they had already done their own gap assessment internally, by their ISMS guys, and that they would be doing all their remediation on their own; they just needed me to quote for the certification audit because “PCI is forcing us to be certified by a third party, which we believe we can do better than you can”.

There was not much to talk to them about, but I did mention that if we found major NCs (non-compliances, in ISMS speak), we would then use that ‘certification audit’ as our own gap assessment, and we might be required to come back again to verify.

The company truly believed that PCI was a subset of ISMS and they handled it as such.

So we came in for the certification and found that their entire scope was completely messed up. For instance, there was a supposedly out-of-scope network with systems connecting into their CDE for monitoring. Because card data wasn’t passing through it, they had marked it out of scope. Unfortunately, PCI doesn’t see it that way. This would be considered non-CDE in scope, and systems within this network need to be secured and hardened as per PCI as well. The logic is that if these systems are compromised, there is a path into the CDE that can be exploited.

They made a huge fuss over this, claiming that they were willing to absorb the risk and that their management had signed off on the risk assessment.

ISMS is a best practice/guideline at best – a great marker for security – but PCI is a standard. If you can’t meet it, then you don’t meet it. Of course, there are ways around this particular issue, but they insisted we pass them simply because their management accepted the risk.

Here’s another idea: PCI-DSS generally doesn’t care about your business. It’s not about you. It’s about card data. Visa/Mastercard and the Jedi PCI Council are not concerned about your business – they are concerned about the confidentiality and integrity of card data. That’s why you will not find any BCM or DRP requirements in PCI. RTO and RPO? Pfft. They don’t care. Your business can go down for 10 weeks, but as long as card data is safe, all is good.

And that’s why scoping is HUGELY important. Many people might think that a gap assessment is a waste of time. It is, if it’s done incorrectly. I recently witnessed a ‘gap assessment’ report that was a complete mess. It just listed the twelve PCI requirements and, for each one, gave an overview of the company’s controls and what they should be doing – ripped off almost verbatim from the actual standard itself. Which can be downloaded for free.

A gap assessment needs to bring you from one place to another, and to do that it must provide the following:

a) A clear understanding of your scope, including a writeup on your network and the processes that have been assessed. It should also be clear what is out of scope. This initial scope is usually not set in stone, as remediation will sometimes change what is in scope and what is not, but at least you have something concrete to start with.

b) If possible, an asset register for PCI. If this is not possible (for many reasons, e.g. some assets required for a control have not been purchased yet), then the asset inventory needs to be prioritised as quickly as possible to see what is scoped and what is not. The register should be clear on: public IPs, internal devices, servers, network devices, people involved, desktops, databases, etc.

c) Networks in scope and out of scope. This is key, as companies are required to identify the segments scoped out and do segmentation testing. Also, the CDE must be clearly marked, and non-CDE in scope (we call it NCIS) must also be identified. Systems in the NCIS could be monitoring systems, SIEM, AD, etc. Any system that connects to the CDE but does not store, transmit or process credit card data is considered NCIS. NCIS systems must be scoped for testing, quarterly scans, hardening and such (see the toy sketch after this list).

d) A clear roadmap of remediation and recommendations to proceed, specific to the organisation. These ‘gaps’ should all have corresponding solutions.
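
On point c), here is a toy way to encode the NCIS rule of thumb. Obviously a simplification – real scoping follows data flows, not two booleans – but it captures the decision we keep having to explain:

```typescript
// Toy classifier for PCI scoping, encoding the rule of thumb above:
// touches card data => CDE; connects to the CDE without touching card
// data => NCIS (non-CDE in scope); no connectivity => out of scope.
interface Asset {
  name: string;
  handlesCardData: boolean; // stores, processes or transmits card data
  connectsToCDE: boolean;
}

function scope(asset: Asset): "CDE" | "NCIS" | "Out of scope" {
  if (asset.handlesCardData) return "CDE";
  if (asset.connectsToCDE) return "NCIS"; // still needs hardening, scans, testing
  return "Out of scope";
}

const assets: Asset[] = [
  { name: "payment-db", handlesCardData: true, connectsToCDE: true },
  { name: "siem-collector", handlesCardData: false, connectsToCDE: true },
  { name: "hr-portal", handlesCardData: false, connectsToCDE: false },
];

for (const a of assets) console.log(`${a.name}: ${scope(a)}`);
```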

If the gap assessment doesn’t give you any of these, then it’s pretty useless. If it doesn’t move you forward or provide you with the information to move forward, it’s not a gap assessment. It’s an expensive training session.

So, back to the first example of a customer. It wasn’t possible for us to certify them no matter how they argued, simply because they were not compliant (there were also many other areas of non-compliance, for instance storage of card data in text files and sending it via email).

As a lesson – don’t neglect proper scoping. It’s hard work, but as I always say: start wrongly, do wrongly, finish wrongly. And that’s 6–8 months down the drain, thousands of ringgit of investment gone, and your job on the line. The second example is pertinent too: there is as much a chance of OVERscoping as there is of UNDERscoping.

An overscoping example would be purchasing all sorts of snazzy security systems worth thousands of ringgit, only to find that they were not needed, or that current controls were sufficient. Nice to have – but most of our customers, no matter how big they are, always keep a trigger on the budget, and cost optimisation is topmost among their priorities.

If you want us to help you in your PCI-DSS scoping, drop us a note at avantedge@pkfmalaysia.com and we can get you started with the initial understanding straight away!

Deployment of Alienvault in Practice Part 2


So now you have a server instance of Alienvault in your network and you need to get your sensors up and running.

While the majority of small deployments can do with an All-In-One (AIO), there are reasons why you might need a separate server/sensor configuration. Remote sites, for instance, where you want the sensor located onsite to perform log normalisation, vulnerability assessments, availability monitoring, etc. The sensor does quite a fair bit of work as well – and on top of that, it balances out the EPS. Remember, the AIO has a limit on EPS, so if you are looking at anything beyond 1,000 EPS, you are going to struggle to keep up with the events without a sensor.

Deploying a sensor is straightforward.

First, it’s important to understand that a sensor does not have a GUI frontend, so all configuration is done via the Alienvault Setup menu or the CLI. This doesn’t make it any more difficult – in fact, the hardest part is keying the license key into the menu, since we can’t cut and paste, so you need to make sure you type it in correctly.

Second, you should always have a server instance before you go around setting up the sensor.

In the Alienvault Setup menu, go to Configure Sensor -> Configure Alienvault Server IP. This should be your server IP. Some have asked whether it should be the management IP or the logging IP. It should be the management IP – unless, of course, your management IP is not reachable, in which case the only reachable IP is the logging IP of your server.

So go ahead and do the same for your framework IP address as well. Apply all changes and you are set.
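
Before heading back to the server, it is worth a quick sanity check that the sensor can actually reach the server. Here is a minimal probe sketch – we are assuming the default AlienVault server communication port of TCP 40001, so adjust if your deployment differs (a plain ping or telnet does the same job):

```typescript
import * as net from "net";

// Quick reachability probe from the sensor side. Port 40001 is the
// assumed default server port -- verify against your own deployment.
function probe(host: string, port = 40001, timeoutMs = 5000): void {
  const sock = net.connect({ host, port });
  sock.setTimeout(timeoutMs);
  sock.on("connect", () => {
    console.log(`OK: ${host}:${port} is reachable`);
    sock.end();
  });
  sock.on("timeout", () => {
    console.error(`Timeout: ${host}:${port}`);
    sock.destroy();
  });
  sock.on("error", (err) => console.error(`Failed: ${err.message}`));
}

probe("192.0.2.10"); // your server's management IP
```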

Head back to the server, and go to the UI

Configuration->Deployment->Sensors

You will see the following message

Warning: The following sensors are being reported as enabled by the server, but aren’t configured

Don’t worry about this; just click on Insert and you are done. It’s that straightforward. You will see the sensor listed, with the context it’s under and its version, and the status should have a checkbox next to it.

The final part is to get the Logger up and running.

Opposite to the sensor, the Logger is set up via the UI.

What’s important to understand here is that the flow is Sensor -> Server -> Logger.

So the logger is actually the end of the flow, where all your logs are forensically stored, archived and validated. As far as the server is concerned, it sees the Logger as a parent.

ON THE LOGGER

Head over to the Logger UI (having already set it up as you did the server initially, with IP addresses, licenses, etc.).

Go to Configuration->Deployment-> Servers and use “Add Server”

Again go ahead and use the IP address you have been using to define your server during your sensor config.

Once you have added the server and saved, head back to the Server screen and click on your logger instance (which should be there by default already)

Now select “NO” for everything except “LOG” in the form.

That’s it. You shouldn’t type in the REMOTE USER and all that, as this is done later on the Server.

ON THE SERVER

Now, back to the Server UI. Go to the same Configuration->Deployment->Servers.

It sometimes can get confusing here as the UI is the same, so make sure you name your Logger and Server appropriately!

On the server, you should see both the SERVER and LOGGER under the UI.

Modify the LOGGER (remember, you are on the SERVER UI, NOT THE LOGGER UI).

You won’t be able to change anything in there, but you can set the remote admin user and password to log into the Logger. Use the admin credentials (not root) and let the URL populate itself by clicking on it.

Set “Remote Logger”

Finally, go back to the server screen and click on the SERVER -> Modify

You can now opt to set Log to NO. Under that, in the Forward Servers option, click Add Server and go ahead and add in your Logger.

Save and Apply all changes.

Click on Server Hierarchy and we have a nice primitive depiction of the Server pointing to the Logger. Well Done!

Now – a note: if you are using an AIO appliance as the server instance, you can set Log to YES in the AIO. That means you are logging in both locations.

In your Logger interface, you will see two different colour boxes, depicting which Logger each event was sent to.

If for some reason you want to say, OK, for assets 1–20 send to the AIO, and for assets 21–100 send to the Logger, you can disable the forwarding we set up above and do it via policies instead. The great thing about Alienvault is that it allows that granular flexibility to control where your AIO forwards (or doesn’t forward) logs.

We will explore Policy Setup in the future.

For now, enjoy your three piece band – Sensor, Server and Logger!


PCI-DSS v3.2 is officially published


After some back and forth on the draft versions, PCI v3.2 is now officially published. You can go ahead and download it here: click on the nice little link saying 3.2 and agree to all sorts of terms and agreements nobody ever reads.

Anyway, a little bit of background on this release. Usually, new versions of PCI are released late in the year, in November. In fact, I even mentioned to a few clients that version updates were done in November – until PCI announced that v3.2 was to be released in the March/April timeline due to a few factors, as described in this article. So yeah, now I need to admit I was bamboozled. PA-DSS v3.2 is likewise to be released sometime in May or June.

So here’s how it works: 3.2 is now officially effective. PCI v3.1 will be retired at the end of October 2016 (basically to allow everyone to complete v3.1 if they are already in the final stage of doing so). So all assessments/audits that occur AFTER October will be against version 3.2. This is important to note, because if a gap assessment begins now and has a timeline that completes AFTER October, you want to use 3.2. For ongoing projects, it is best we scurry and get it all done before October! Chop chop!

There is a bunch of ‘best practices’ that will become requirements by February 2018. Other dates you need to be aware of:

a) June 30, 2016 – companies that have not yet migrated off SSL/early TLS will need to have a secure service offering (meaning an alternative service utilising TLS 1.1 and above; I will go out on a limb here and suggest using TLS 1.2, knowing how volatile the PCI guys are in changing stuff).

b) June 30, 2018 – SSL/early TLS becomes extinct as far as PCI is concerned. No more mitigation plans! The exception is POS terminals that have no known exploits.

c) January 31, 2018 – This is the deadline where new requirements graduate from being ‘best practices’ to ‘mandatory requirements’.

OK, now that that’s out of the way, here’s a snapshot of the main stuff in v3.2 and what we are facing:

a) The new Appendix A3 covers the Designated Entities Supplemental Validation. This basically means that if an acquirer or Visa/Mastercard deems that a service provider needs to go through ADDITIONAL requirements on top of the torture they have already endured for PCI, they can require it. These victims could include companies processing ridiculous amounts of transactions, aggregators, or companies that are constantly breached. So PCI has a whole bunch of extra stuff for you to do, mainly dealing with BAU activities, incident response, documentation and logical access controls.

b) Additional cryptographic documentation – service providers are not going to enjoy this. We will now need to formally document the protocols, key strength, cryptoperiod and key usage for each key, plus an HSM inventory. This should technically be done anyway in your key management procedure document, but now it’s a requirement. Take a look at NIST SP800-57 for the key concepts to get you started; a rough illustration follows below.
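
As an illustration of the level of detail expected – the field names below are ours, not mandated by the standard – a key inventory entry might look something like this:

```typescript
// Illustrative shape for a cryptographic key inventory entry. The
// field names are ours, not prescribed by PCI-DSS, but the content
// maps to what v3.2 asks service providers to document.
interface KeyInventoryEntry {
  keyName: string;
  algorithm: string;    // e.g. AES
  keyStrength: number;  // in bits, e.g. 256
  usage: string;        // what the key protects
  cryptoperiod: string; // how long before rotation
  custodians: string;   // who holds the key (or its components)
  hsm: string;          // which HSM (if any) stores it
}

const example: KeyInventoryEntry = {
  keyName: "DEK-cardvault-01",
  algorithm: "AES",
  keyStrength: 256,
  usage: "Encrypting stored PAN in the card vault database",
  cryptoperiod: "2 years, per NIST SP800-57 guidance",
  custodians: "Custodian A + Custodian B (split knowledge)",
  hsm: "HSM-DC1-01",
};

console.log(example);
```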

c) 8.3 is significant: multi-factor login. Whereas previous versions stated that two-factor authentication was required for remote access from non-secure networks, 3.2 shifts this requirement to “all personnel with non-console administrative access, and all personnel with remote access to the CDE”. Wait, what? This means that even if you are accessing an administrative UI or page (non-console) from a secure environment, multi-factor authentication (two factors are good enough) is required! I think there will be some pushback on this, as it requires a fair bit of effort. We have until February 2018 to implement it.

d) Another big one is 11.3.4.1 – segmentation penetration testing now needs to be done every SIX months, as opposed to once a year. This is not good news for some clients who have segments popping up like acne on a pubescent face. That’s quite a lot of work for them, and it might give them more cause to consider a completely isolated network just for PCI-DSS, with its own link and architecture, as opposed to sharing with multiple not-in-scope segments. Again, we have a grace period till Feb 2018.

e) The new requirement 12.11 is interesting. I have always been an advocate of doing constant checks with clients to make sure they are at least practicing PCI. We have a free healthcheck service every quarter for clients who take up our other services, and we check exactly this: daily log reviews are done, the firewall rules are clean, new systems are documented and hardened, incidents are responded to, changes get proper approval, etc. It’s nice to see that our efforts now have something formal tied to them. Feb 2018 is the deadline.

f) Here’s a downer: Appendix A2. We all know there was some sort of escape loop for those who were caught with SSL and early TLS in their applications. They created mitigation documents, which may or may not be true. Just saying. Now, if you take this route, it is no longer a free pass for your ASV scans or vulnerability scans. If you have these protocols in place, your mitigation plan must fully address the A2.2 requirements. If you are a service provider, take note of A2.3: YOU MUST have a secure service option in place by June 30, 2016! Not 2018. 2018 is when you stop using SSL/early TLS. So this timeline is slightly confusing. Like X-Men: Days of Future Past confusing.
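
If you are not sure whether a service of yours still accepts early TLS, a quick client probe will tell you. Here is a minimal sketch using Node’s tls module – the host is a placeholder, and note that some newer Node/OpenSSL builds refuse to negotiate TLS 1.0 at all, in which case a dedicated scanner is the better tool:

```typescript
import * as tls from "tls";

// Probe whether a host still accepts TLS 1.0 ("early TLS" in PCI
// terms). A successful handshake means Appendix A2 applies to you.
const socket = tls.connect(
  {
    host: "www.example.com", // placeholder: your own service
    port: 443,
    minVersion: "TLSv1",
    maxVersion: "TLSv1", // force the handshake down to TLS 1.0
    rejectUnauthorized: false, // we only care about protocol support
  },
  () => {
    console.log(`Handshake succeeded with ${socket.getProtocol()} -- early TLS is still enabled!`);
    socket.end();
  }
);
socket.on("error", () => console.log("TLS 1.0 refused -- good."));
```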

Some main clarifications include:

a) Secure code training now officially needs to be done annually – you don’t want to guess how much pushback I get when I tell clients it’s annual, and not something that is done when they have the budget for it (which is never).

b) Removing the need to interview developers to ‘demonstrate’ their knowledge – I do a bit of programming, but I would be foolish to think I can go up against a senior developer who eats, breathes and… lives for coding. I’ve seen how awkwardly some younger QSAs struggle with this (determining whether the senior dev guru is good enough) when it’s obviously not something they know much about. Let auditors audit and let developers code.

c) Finally, a note was added to Requirement 8 to say that the authentication requirements do not apply to cardholder accounts, only to administrative or operational/support/third-party accounts. We have always practiced this anyway, but now it’s clear.

d) More clarifications on addressing vulnerabilities considered ‘high’ or ‘critical’. I am not a big fan of these. I think every vulnerability should eventually be addressed, just prioritised in terms of timing. Even if it’s low or medium, it’s still important to have a mitigating factor for it. There is a reason why it’s a vulnerability, and not something you can sweep under the carpet.

e) A good note on pentesting in 11.3.4.c – testing now needs to be done by a qualified internal or external resource with organisational independence. Again, we already practice this, but it’s good that it’s now official.

So, that’s about it. Of course, there’s a fair bit more. I suggest you poke through the summary of changes first and then go through the documentation itself.

Be aware of those dates! They are all over the place (June 2016, June 2018, Jan 2018), and who knows, they might change in the future. Have a happy compliance.


Deployment of Alienvault in Practice Part 1


In this article, we are going to explore deploying Alienvault in practice. While there are many documents out there that give pretty clear steps on what to do, they are somewhat scattered, and we don’t want to get to a point where we are 85% into the deployment only to find that we were supposed to do something at the 25% mark and did not do it.

Before anything else, you should have a deployment checklist to make sure everything is in order. The checklist is pretty long – much too detailed to put into a post like this. Email us at alienvault@pkfmalaysia.com and we can get you started.

In this example, we will be using a 3-piece band: the Server, the Sensor and the Logger. You can generally just trade the server for an AIO, which we did, but in general it’s going to serve as a server. Remember though, with an AIO you also have an additional sensor if you want to enable it, or a logger as well, with around 4 TB of compressed space (vs 9 TB of compressed space for a standalone logger).

With that out of the way, and assuming that physically everything is racked and connected and the VMs are up and running, you are ready to go. Remember, if you have separate systems, always start with the server (or the AIO) first, and only then move on to the sensor. Otherwise, your sensor might be orphaned.

Now, of course, if you are using the virtual appliance, your VMware needs to be set up. One question we encountered is: how many interfaces should we have? Well, you should have the management interface (and use that for log collection as well), and the other interfaces would be for monitoring. One of the trick requests here is: hey, I want a separate management interface and log collection interface, so that nobody knows my management interface.

Possible. But we have seen deployments where both the management interface and the log collection interface sit on the same subnet. This is probably going to cause some issues – for one, routing might well get screwed up. Another is that HIDS deployment might constantly refer back to the management interface. So, rule of thumb:

If you only have one subnet, just use the one interface for management and log collection.

Another question we get: by default, the AIO comes with six interfaces (because, remember, it’s also a sensor!). Some clients have it in their minds to use all six. Generally, aside from management and logging, the other interfaces won’t be assigned an IP and will be monitoring interfaces (i.e. put them on a SPAN port and monitor away). Unless you have very specific reasons to, you are unlikely to use all the monitoring interfaces (depending on how you set things up), so don’t feel like you are losing out. A lot of the setups we see simply have the sensor or AIO located at a central switch with a SPAN or TAP, and they monitor just fine.

Another question: thin or thick provisioning for the disk format? Well, we are used to just setting it as thin, meaning it will grow as the logs increase, but if you have the space, setting it to thick might be fine too. I am not a VMware guru, and I am sure the VMware gurus out there will go into battle over this one, but we’ve deployed on both disk formats and it doesn’t seem to have an extreme impact either way. Of course, I stand to be corrected.

Yet another question (even before we get into deployment!): if I buy hardware with a 200 TB hard drive, can Alienvault use all 200 TB instead of the measly 1 TB for the AIO and 1.8 TB for the Logger? The short answer is no: the size of the virtual machine is fixed in the OVF itself, so if you purchase a ridiculous amount of hard drive space, the Alienvault image is still going to occupy only what it is going to occupy. But hey, you could of course start hosting other virtual systems there and use the space up!

Setting up the server

1) OK, finally, let’s get down to it. Once you boot up – and assuming you have installed the OVF correctly, if you are running the virtual appliance – you will be dropped into the setup menu. Select manual network interface configuration and define an IP. I would suggest this as opposed to depending on a DHCP server. Aside from that, the other setup parameters are what you should expect and should be able to fill in pretty easily.

Now, one of the annoying things we sometimes face is that when the initial setup is rebooted, we get stuck at that Alienvault splash screen that keeps loading while nothing happens. To be safe, when you reboot, just keep pressing ESC till you see the boot details. If you are still stuck, Alt+F2 might let you escape. Otherwise, you might need to give it the good old Vulcan nerve pinch (Ctrl-Alt-Del).

Other times, you might just be stuck at the VMware console with the annoying “Waiting for connection” message that seems to hang. Your system is fine; it’s just the VMware console being moody. Restarting your vSphere client might do the trick.

Once you can SSH into your box, you are confronted with a login screen, and once logged in, you need to change the root password. Don’t forget it!

After that, register your appliance. If you are running an AIO/server/logger, I would suggest doing an online Web UI registration – obviously you will need connectivity to the internet. You can copy and paste your product license key once you access the Web UI, as there will be an option for you in the Free Trial screen. After that, you can set up the admin user and password. There is an offline technique as well; or, if you are in the mood to type in the entire license, you can do so from the Alienvault menu itself.

After this is done, set up the hostname. You need to do this from the Alienvault setup menu: select System Preferences -> Configure Hostname.

Make sure you apply all changes. Once you apply all changes, go ahead and reboot the appliance from the menu itself.

Another important thing is to change the time zone. After the reboot, head over to System Preferences -> Change Location -> Date and Time -> Configure Time Zone, select the place you are at, and apply all changes.

Likewise, you might want to use an NTP (Network Time Protocol) server as well. In the same Date and Time menu, select Configure NTP Server, enable it, and put in the NTP hostname (if you have DNS defined) or IP. Apply everything.

Now, this might be a good time to check on the Linux box that your time is correct. Jailbreak your system and type in ‘date’; you should see the new time zone reflected.

Likewise, go to the Web UI, log in, and click on Settings at the top right. Make sure the time zone for that user is properly defined. Now check back on the SIEM (Analysis -> SIEM) in the Web UI; you should see the dates in whatever time zone you have defined for yourself.

Timestamping is obviously a big deal in any SIEM, and besides these areas to be wary of, we should also know that individual plugins have time zone options too. This is helpful if a data source suddenly changes time zones and we have to accommodate it.

It looks like the server is all set. If you have an AIO, you should also now see your own IP address under Configuration -> Deployment -> Sensors / Servers, because you are both a Sensor and a Server.

Next, we will look at setting up the sensor and logger.


