Creating Custom Address Lists with Exchange Online

Holy cow it has been a while since I made a post on here!  Things have been crazy.  Anywho, let's talk address lists.  Typically when users ask for "folders" in the global address list (GAL), even with Exchange Online, the first thing my brain goes to is public folders (PF).  Then I got thinking: that's kind of the old school way of doing things, and a particular customer of mine was really not into that idea.  So I did some research and came up with a solution using custom address lists!

Custom address lists are just like the other predefined address lists you are used to seeing, like the GAL, All Contacts, etc.  They show up as their own separate address list under All Address Lists, or under the Directory option if you're using Outlook Web App (OWA).  The cool thing is you can set filters on what shows up in these address lists based on what's in your GAL.  For example, if you want just contacts from the state of Arizona to show up in a custom address list named Arizona Contacts, you can totally make that happen.  Better yet, it'll maintain itself!  Follow this link for more information; we're going to move on to the good stuff.  Since there is no UI option for this, it must be done with PowerShell as of the time of writing.

Creating a Custom Address List in Exchange Online

To begin, you must log in to Exchange Online and ensure that you have the Address Lists admin role assigned to you.  This role is not assigned by default, so you may want to make your own role group with just the Address Lists role assigned to it and add your members.
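If you go the custom role group route, something like this should do it; the role group name and member here are just placeholders, only the Address Lists role name is the real bit.

    # Create a role group containing only the Address Lists role (name and member are examples)
    New-RoleGroup -Name "Address List Admins" -Roles "Address Lists" -Members "admin@contoso.com"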

Once you have the permissions all sorted out (heads up, they can take a while to apply), start by connecting to Exchange Online with PowerShell.
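If you use the ExchangeOnlineManagement module, the connection is a one-liner; swap in whatever connection method you normally use.

    # Connect to Exchange Online (ExchangeOnlineManagement module)
    Connect-ExchangeOnline -UserPrincipalName admin@contoso.com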

Now that you're connected, creating a custom address list is a simple one-liner.  We'll run with the contacts from Arizona example I mentioned in the intro.  See the reference material links at the end of this post for more information on how you can filter.
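Here's a sketch of that one-liner for the Arizona example; the list name and exact recipient filter are mine, so adjust them to fit your directory.

    # Create an address list of mail contacts whose StateOrProvince is Arizona
    New-AddressList -Name "Arizona Contacts" -RecipientFilter "(RecipientType -eq 'MailContact') -and (StateOrProvince -eq 'AZ')"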

You would think you were done, right?  WRONG!  Custom address lists have what I would consider a bug, and what Microsoft considers acceptable behavior.  You have to follow the steps in this article for the address list to start functioning correctly.

From what I can make of this, there is some sort of modified attribute for each object on the backend that we don’t see and the address lists need that to be more recent than the creation date of the address list.  That is just a guess, don’t hold me to it!

With that you are all done.  Moving forward as you add contacts from Arizona, they will automatically be added to your address list.  Pretty cool!

 

Reference Material:

Manage Address Lists
New-AddressList
Set-AddressList
Filterable Properties
Get-Recipient

Add SSO Support for Chrome Browser with ADFS 3

By default, ADFS 3 (Windows Server 2012 R2) only supports the seamless Single Sign-On (SSO) that we all expect when using Internet Explorer.  Chrome can be enabled though by following these steps:

1.  Log in to your on-premises ADFS server and launch PowerShell as administrator.

2.  Run the following command to see the current set of supported browsers:
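The list of SSO-capable browsers lives in the WIASupportedUserAgents property:

    Get-AdfsProperties | Select-Object -ExpandProperty WIASupportedUserAgents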

If you have the default configuration, it will return the following:
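On a stock 2012 R2 install the list typically looks something like this (your entries may differ slightly):

    MSAuthHost/1.0/In-Domain
    MSIE 6.0
    MSIE 7.0
    MSIE 8.0
    MSIE 9.0
    MSIE 10.0
    Trident/7.0
    MSIPC
    Windows Rights Management Client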

3.  Run the following command to add Chrome support to the list:
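The usual approach is to append Chrome's Mozilla/5.0 user agent token to the existing list rather than replace it:

    # Append Chrome's user agent string to the existing WIA supported list
    Set-AdfsProperties -WIASupportedUserAgents ((Get-AdfsProperties).WIASupportedUserAgents + "Mozilla/5.0")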

4.  Confirm your change by running the same get command from step 2.  You should see the same list as before with Mozilla/5.0 appended to the end.

5.  Restart the ADFS service to apply changes:
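Assuming the default service name:

    Restart-Service adfssrv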

All done!

Install & Configure Elastic Curator for Index Management

After you have set up your ELK Stack and have been using it for a while (see Step-By-Step Install ELK Stack), a question should start creeping into your head: how do I stop my Elasticsearch indexes from growing endlessly?  Now you could occasionally go into Kibana and delete the indexes via the GUI found there, but we're sysadmins!  We automate things!  Luckily Elastic has provided a utility for managing indexes named Curator, which is easily run as a cron job.  Win!  Be sure to visit the Elastic Curator page to get an idea of what you can do with it and roughly how it is configured.  In this example we are going to configure it to delete all indexes beginning with winlogbeat- and filebeat- that are older than 90 days, so let's get to setting that up.  I will be showing you the most recent version as of writing this, Curator version 5.5.4.

Installing & Configuring Curator

1.  Start by downloading the DEB package which is hidden on the APT install page.  I usually place these in /tmp for easy cleanup.
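Something along these lines; I'm not certain of the exact package URL, so grab the current DEB link from Elastic's Curator install page and substitute it in.

    cd /tmp
    # Example URL - copy the actual link from the Curator APT install page
    wget https://packages.elastic.co/curator/5/debian9/pool/main/e/elasticsearch-curator/elasticsearch-curator_5.5.4_amd64.deb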

2.  Once you have that downloaded, let’s install it.
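Assuming the package landed in /tmp with the name above:

    sudo dpkg -i /tmp/elasticsearch-curator_5.5.4_amd64.deb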

3.  Curator will now be installed at /opt/elasticsearch-curator, but we don't want to mess with anything in there.  I created a hidden .curator directory in my home directory, which is where the documentation looks by default, and then created a configuration file for Curator in that directory.
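From your home directory, that looks something like this:

    mkdir ~/.curator
    nano ~/.curator/curator.yml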

I placed the following configuration in the curator.yml file.
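A minimal client config that matches the description below; the log path assumes a username of username, so adjust it.

    # ~/.curator/curator.yml
    client:
      hosts:
        - 127.0.0.1
      port: 9200
      use_ssl: False
      ssl_no_validate: False
      timeout: 30
      master_only: False

    logging:
      loglevel: INFO
      logfile: /home/username/.curator/log.log
      logformat: default
      blacklist: ['elasticsearch', 'urllib3']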

This is pretty straightforward.  It tells Curator to connect to Elasticsearch found on localhost (127.0.0.1) on port 9200 without SSL and send basic log info to a file at /home/username/.curator/log.log.

4.  Now that Curator knows what to connect to, we need to tell it what to do.  We do this with an action file, which I creatively named action.yml.

I placed the following configuration in the action.yml based off of the example found here in the documentation.
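Here's a sketch of a delete_indices action matching the 90-day winlogbeat-/filebeat- goal; it follows the documented example, but the regex and filter values are mine.

    # ~/.curator/action.yml
    actions:
      1:
        action: delete_indices
        description: >-
          Delete winlogbeat- and filebeat- indices older than 90 days
          (based on the date in the index name).
        options:
          ignore_empty_list: True
          disable_action: False
        filters:
        - filtertype: pattern
          kind: regex
          value: '^(winlogbeat-|filebeat-).*$'
        - filtertype: age
          source: name
          direction: older
          timestring: '%Y.%m.%d'
          unit: days
          unit_count: 90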

5.  Now we can test it with the --dry-run argument.  This will log what Curator WOULD do if it were run without the --dry-run argument; it does not actually perform the actions.
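Point Curator at both files:

    curator --config ~/.curator/curator.yml --dry-run ~/.curator/action.yml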

Check the log to see which indexes older than 90 days would have been deleted.

6.  Assuming that all checks out, we just have to make a cron job to run this thing for us.  I like to run it daily, but it's dealer's choice.  Start by making your script in the appropriate directory.
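I'm assuming the script lives alongside the Curator config; put it wherever you keep this sort of thing.

    nano /home/username/.curator/curator.sh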

And added the following:
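The script just calls Curator with the two files from earlier; I'm assuming the DEB put a curator symlink in /usr/bin (use /opt/elasticsearch-curator/curator if not).

    #!/bin/bash
    # Run Curator with the client config and the delete action file
    /usr/bin/curator --config /home/username/.curator/curator.yml /home/username/.curator/action.yml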

7.  Now we just need to add the actual cronjob.
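Since it will run as root, edit root's crontab:

    sudo crontab -e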

Append the following to the bottom of the file:
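A daily 6am entry matching the description below:

    0 6 * * * /bin/bash /home/username/.curator/curator.sh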

This configures the cron job to run the curator.sh script that we made in step 6 every day at 6am as root.  Curator is now set up to manage your indexes!

Schannel Event 36870 – A fatal error occurred – RDP

Came in one morning to reports that nobody could access a particular Windows Server 2012 R2 RDS server.  To keep from being too wordy, I took some time and narrowed it down to an issue with that one particular server, not RDS itself.  As I kept digging I came across numerous instances of the following Schannel Event 36870 error on the affected RDS host, which I could then reproduce by attempting to make an RDP connection to the server experiencing the issue.

Now this led me down quite a number of SSL certificate rabbit holes, but the winner came from this Stack Overflow post, which referenced this Microsoft blog post, in which scenario 2 was my solution.  I restored the default permissions on C:\ProgramData\Microsoft\Crypto\RSA\MachineKeys, restarted the box, and voilà!  RDP was functional again!
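For reference, the defaults I restored amount to Everyone with special read/write access and Administrators with full control on the folder itself.  The icacls sketch below can get you close, but double-check against the Microsoft blog post before changing anything; the grant lines are examples, not a verified one-to-one restore of the defaults.

    # Inspect current permissions on the MachineKeys folder
    icacls C:\ProgramData\Microsoft\Crypto\RSA\MachineKeys

    # Example only - grant the documented defaults on the folder itself
    icacls C:\ProgramData\Microsoft\Crypto\RSA\MachineKeys /grant "BUILTIN\Administrators:(F)"
    icacls C:\ProgramData\Microsoft\Crypto\RSA\MachineKeys /grant "Everyone:(R,W)"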

Creating Static Routes with Netplan on Ubuntu 18.04

Let me just start by saying that if there is one thing that has driven me crazy with Ubuntu Server 18.04, it is the fact that they went and changed how we configure networking.  It had been the same for so long, it worked well, and now I'm just sounding like a user so I'm going to cut myself off there!  Anyhow, Ubuntu 18.04 has moved to Netplan for its network configuration.  Now I used to use post-up pretty heavily in prior versions of Ubuntu, but hook scripts are no longer supported, so I use a configuration like the one below for static routes.  I do not say this with disdain though; it really is pretty simple.
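Here is a minimal sketch of a Netplan config with a static route; the interface name, addresses, and route are all placeholders, so swap in your own and apply with sudo netplan apply.

    # /etc/netplan/01-netcfg.yaml
    network:
      version: 2
      renderer: networkd
      ethernets:
        ens160:
          addresses:
            - 192.168.1.10/24
          gateway4: 192.168.1.1
          nameservers:
            addresses: [192.168.1.1]
          routes:
            - to: 10.10.10.0/24
              via: 192.168.1.254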

 

Step-By-Step Install ELK Stack on Ubuntu 18.04

Elasticsearch, Logstash, and Kibana (aka the ELK Stack) are very powerful tools for storing, analyzing, and visualizing log data in a centralized location.  That being said, it can be quite the headache to actually get up and running if it is your first experience with it.  Having spent the time poring over the documentation provided by Elastic, which I must say is quite impressive, and struggling through getting the ELK stack up and running, I figured I would make a step-by-step.  Some things to note before we get started:

  • This will be for the current most recent version, 6.3.2, released on July 24, 2018.
  • I will set up an nginx reverse proxy to access Kibana.
  • I will not be including SSL setup.
  • I will be installing all components of the stack with DEB packages.
  • Java 8 is required prior to ELK setup.  See my post Install Java JDK/JRE on Ubuntu Server without APT.
    • Java 10 is not compatible with Logstash 6.3.2.  I learned this the hard way, take my word for it.
  • I will be showing Beats configurations in a separate post.  This will be only the ELK stack setup.

I am not using APT repositories for anything because I have been burned by the upgrade process in the past with ELK, so I just manually upgrade as necessary.  Now, let’s get this thing started.

Install & Configure Elasticsearch

1.  Start by navigating to the Elastic downloads page and select Elasticsearch.

2.  Log in to your Ubuntu box and download the DEB package.  I put it in /tmp for easy cleanup.
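The 6.3.2 DEB comes straight from Elastic's artifacts site:

    cd /tmp
    wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.3.2.deb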

3.  Now download the checksum and compare against your downloaded package.  It should return OK.
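Grab the published SHA-512 and check it; it should print elasticsearch-6.3.2.deb: OK.

    wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.3.2.deb.sha512
    sha512sum -c elasticsearch-6.3.2.deb.sha512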

4.  Install Elasticsearch.
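Straightforward dpkg install:

    sudo dpkg -i elasticsearch-6.3.2.deb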

5.  Open the Elasticsearch config file found at /etc/elasticsearch/elasticsearch.yml and uncomment/edit the following settings.
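Those settings end up looking like this (matching the description that follows):

    cluster.name: my-cluster-name
    node.name: my-node-name
    path.data: /var/lib/elasticsearch
    path.logs: /var/log/elasticsearch
    network.host: 127.0.0.1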

This will configure your cluster name as my-cluster-name, your node (or server) as my-node-name, your data storage location as /var/lib/elasticsearch, your log location as /var/log/elasticsearch, and your host as localhost (127.0.0.1).  These are pretty default settings and I don’t see many reasons to change them, but do so if you wish.

6.  Reload the systemd daemon, then enable and start the Elasticsearch service.
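The standard systemd dance:

    sudo systemctl daemon-reload
    sudo systemctl enable elasticsearch.service
    sudo systemctl start elasticsearch.service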

7.  Test Elasticsearch.
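A quick curl against the REST port does the trick:

    curl -X GET "http://127.0.0.1:9200"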

Which should return:
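Roughly the following (trimmed for brevity); the important parts are the node and cluster names you set and the version number.

    {
      "name" : "my-node-name",
      "cluster_name" : "my-cluster-name",
      "version" : {
        "number" : "6.3.2"
      },
      "tagline" : "You Know, for Search"
    }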

 

Install & Configure Kibana

1.  Download Kibana DEB package from Elastic downloads page.
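Same routine as Elasticsearch:

    cd /tmp
    wget https://artifacts.elastic.co/downloads/kibana/kibana-6.3.2-amd64.deb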

2.  Install Kibana.
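Install with dpkg:

    sudo dpkg -i kibana-6.3.2-amd64.deb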

3.  Open the Kibana config file found at /etc/kibana/kibana.yml and uncomment the following:
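The post doesn't list the lines here, but since nginx will do the proxying, Kibana only needs to listen locally; the usual minimal set looks like this (a sketch, your defaults may already match):

    server.port: 5601
    server.host: "localhost"
    elasticsearch.url: "http://localhost:9200"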

4.  Reload the daemon, then enable and start the Kibana service.
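Same as before, just for Kibana:

    sudo systemctl daemon-reload
    sudo systemctl enable kibana.service
    sudo systemctl start kibana.service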

 

Install & Configure Nginx (Source)

1.  Install nginx.
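From the Ubuntu repositories:

    sudo apt-get update
    sudo apt-get install nginx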

2.  Set up a user for basic authentication.
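One common approach, and the one I'm sketching here with kibanaadmin as an example username, is to generate the entry with openssl:

    # Adds a basic auth user named kibanaadmin (example name) to the htpasswd file
    echo "kibanaadmin:$(openssl passwd -apr1)" | sudo tee -a /etc/nginx/htpasswd.users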

Enter a password for the user when prompted.

3.  Configure nginx by clearing /etc/nginx/sites-available/default and inputting the following:
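A sketch of the reverse proxy config; the server_name is a placeholder and the auth file path matches the one created in step 2.

    server {
        listen 80;

        server_name elk.example.com;

        auth_basic "Restricted Access";
        auth_basic_user_file /etc/nginx/htpasswd.users;

        location / {
            proxy_pass http://localhost:5601;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;
        }
    }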

This will configure nginx as a reverse-proxy for Kibana, while also requiring the username and password set up in step two.

4.  Test nginx configuration and restart service.
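nginx -t validates the config before the restart:

    sudo nginx -t
    sudo systemctl restart nginx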

 

Install & Configure Logstash

1.  Download the Logstash DEB package from the Elastic downloads page.
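Same artifacts site as the others:

    cd /tmp
    wget https://artifacts.elastic.co/downloads/logstash/logstash-6.3.2.deb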

2.  Install Logstash.
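Install with dpkg:

    sudo dpkg -i logstash-6.3.2.deb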

3.  Create the file /etc/logstash/conf.d/10-beats.conf and input the following:
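Matching the description below, port 5044 with SSL disabled:

    input {
      beats {
        port => 5044
        ssl => false
      }
    }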

This will configure logstash to listen for beats applications on port 5044 without requiring SSL.

4.  Create the file /etc/logstash/conf.d/50-output.conf and input the following:
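A sketch of the output block that produces index names like winlogbeat-2018.08.23:

    output {
      elasticsearch {
        hosts => ["localhost:9200"]
        manage_template => false
        index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
      }
    }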

This will configure Logstash to output Beats data to Elasticsearch on this host, to an index whose name is determined by the specified variables.  In this case, the Beats application name and the date, e.g. winlogbeat-2018.08.23.

5.  Test your Logstash configuration.
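Logstash ships a config test flag; assuming the default DEB paths:

    sudo -u logstash /usr/share/logstash/bin/logstash --path.settings /etc/logstash -t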

6.  Restart and enable Logstash service.
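And the usual systemd commands:

    sudo systemctl restart logstash.service
    sudo systemctl enable logstash.service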

 

At this point you should now have a functional ELK server that will accept input from Beats!

Install Java JDK/JRE on Ubuntu Server without APT

As I'm sure many of you are well aware, sometimes you need to pick and choose when to upgrade an application, particularly when you need to wait for application compatibility.  This being the case, you want to be able to update your OS for vulnerabilities without breaking the applications that run on it, which may not support the newest version of certain dependencies (such as Java).  More work for you, but less work in the long run.  Anywho, below is a quick step-by-step for installing the Java JDK/JRE without using an APT repository:

1.  Make your installation directory
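I'll assume /usr/lib/jvm as the installation directory throughout; substitute your own if you keep Java elsewhere.

    sudo mkdir -p /usr/lib/jvm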

2.  Navigate to the Java Downloads Page and download the needed version (jdk-VERSION-linux-x64.tar.gz)

NOTE: I had trouble with wget not downloading the file correctly.  I had to download on a Windows machine and use WinSCP to copy over the file.

3.  Move (or copy) the downloaded file to your installation directory if not already there
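Assuming the tarball is sitting in /tmp, and keeping the VERSION placeholder from step 2:

    sudo mv /tmp/jdk-VERSION-linux-x64.tar.gz /usr/lib/jvm/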

4.  Unpack the tarball
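From the installation directory:

    cd /usr/lib/jvm
    sudo tar -xzvf jdk-VERSION-linux-x64.tar.gz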

5.  Delete the tarball (optional)
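Just tidying up:

    sudo rm /usr/lib/jvm/jdk-VERSION-linux-x64.tar.gz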

6.  Configure Java with the following commands
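A sketch using update-alternatives; the jdk-VERSION directory name will match whatever the tarball unpacked to.

    sudo update-alternatives --install /usr/bin/java java /usr/lib/jvm/jdk-VERSION/bin/java 1
    sudo update-alternatives --install /usr/bin/javac javac /usr/lib/jvm/jdk-VERSION/bin/javac 1
    sudo update-alternatives --set java /usr/lib/jvm/jdk-VERSION/bin/java
    sudo update-alternatives --set javac /usr/lib/jvm/jdk-VERSION/bin/javac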

7.  Create Java Environment Variables
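One way to do this is a profile script; the file name here is just my choice.

    sudo nano /etc/profile.d/java.sh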

Copy the following into the file and save, adjusting as necessary
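Example contents, again using the jdk-VERSION placeholder:

    export JAVA_HOME=/usr/lib/jvm/jdk-VERSION
    export JRE_HOME=$JAVA_HOME/jre
    export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin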

8.  Then run the following to apply
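Either log out and back in, or source the script directly:

    source /etc/profile.d/java.sh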

 

Done!

Run the following to check the installed running version
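The classic check:

    java -version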

Run the following to return Java variable locations
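Depending on what you want to see, echoing JAVA_HOME confirms the environment variable and update-alternatives shows which binary is active (both are just my suggestions here).

    echo $JAVA_HOME
    update-alternatives --display java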

 

ESXCLI – VMFS Storage Reclamation with UNMAP Command

Typically when you provision your datastores for VMware, you will thin provision them.  So let’s say you bring three VMFS5 datastores up to near capacity, but then you free up 30% on each datastore.  Do you get that space back on your SAN automatically?  If you are using VMFS6 datastores and have configured Automatic Space Reclamation then yes you would.  On the other hand if you are running VMFS5 datastores, you will need to perform this process manually.  That’s where the ESXCLI UNMAP command comes into play.  I am going to copy/paste some general information about this command from its VMware Knowledge Base article that is just good to know.  Also, use the same KB article as a reference.  Note that when it references “new”, it means new to vSphere 5.5.

New command in the esxcli namespace that allows deleted blocks to be reclaimed on thin provisioned LUNs that support the VAAI UNMAP primitive.

The command can be run without any maintenance window, and the reclaim mechanism has been enhanced as such:

  • Reclaim size can be specified in blocks instead of a percentage value to make it more intuitive to calculate.
  • Dead space is reclaimed in increments instead of all at once to avoid possible performance issues.

With the introduction of 62 TB VMDKs, UNMAP can now handle much larger dead space areas. However, UNMAP operations are still manual. This means Storage vMotion or Snapshot Consolidation tasks on VMFS do not automatically reclaim space on the array LUN.

Now let’s get into actually doing this thing.  Note that I am writing this under the assumption that you have the datastore you are working with mapped on all of your ESX hosts.  If you have a particular datastore mapped to a particular host, you will need to perform these steps on said host.

1.  Start by opening an SSH session to one of your ESX hosts

2.  Run the following command to get information about all of the mapped datastores for this host
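This lists each mounted datastore with its volume label and UUID:

    esxcli storage filesystem list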

3.  Now you have a choice.  You can run the unmap command with either the UUID or the Volume Label.  I personally use the Volume Label because it’s easier, let’s be real.  Record your volume label and run the following command, where Datastore00 is your volume label:
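Using the volume label:

    esxcli storage vmfs unmap -l Datastore00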

If you would like to use the UUID instead, run the following, where 509a9f1f-4ffb6678-f1db-001ec9ab780e is your datastore's UUID:
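Same command, just with the UUID flag:

    esxcli storage vmfs unmap -u 509a9f1f-4ffb6678-f1db-001ec9ab780e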

 

That’s it!  It’s really that simple.  It will take some time, but you should be able to refresh your SAN periodically to see the available disk space going up.  If you would like to watch the log as it completes the unmap operations, do the following:

1.  Open a second SSH session to the same host you are running the unmap command on

2.  Run the following to change directories (not required, but makes life easier):
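The logs live under /var/log:

    cd /var/log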

3.  Run the following to tail the log:
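I'm assuming the hostd log here, which is where I'd expect the unmap entries to show up:

    tail -f hostd.log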

The unmap operations will be clearly indicated with UNMAP in the log line.

High CPU Usage from microsoft.online.reporting.monitoringagent.startup

After installing the Windows Updates for July 2018, my Azure AD Connect servers started maxing out their CPU.  Upon looking into it, I came across the "microsoft.online.reporting.monitoringagent.startup" process, otherwise known as the "Azure AD Connect Health Sync Monitor" service, eating up nearly all of the machine's CPU.

I came across this Microsoft support article that acknowledged the issue and places the blame on the June 2018 .NET updates.  That is not 100% true for my situation, as I never had issues with the June 2018 updates, just the July 2018 updates, but I'll take it.  It may also be worth noting that I was running version 1.1.654.0 when experiencing this issue.

The solution is to simply update to the most recent version of Azure AD Connect, which in turn will update the problematic Azure AD Connect Health Agent.  This is a pretty simple process, but I'll quickly detail it below.

1. Download the most recent Azure AD Connect installer

2. Run the MSI on the machine you have Azure AD Connect installed

 

3. Select the Upgrade option on the Upgrade Azure Active Directory Connect window

 

4. You will see an "Upgrading the synchronization engine" status.  This can take some time.

5. Enter your GLOBAL ADMINISTRATOR Azure AD credentials as prompted

 

6. Input your DOMAIN ADMINISTRATOR AD DS credentials as prompted

 

7. On the Ready to configure window select Upgrade

 

8. After some time you will see the following success message.  By default it will start a full sync after the upgrade unless you cleared the check box in the previous step.

QuickBooks Periodically Asking for Windows Admin on RDS Server

I come across this issue from time to time with new QuickBooks installations on RDS (terminal) servers.  The fix is really quite simple, though for some reason I always seem to forget about it when deploying.  Anywho, the symptom is that every once in a while the QuickBooks application will require a Windows administrator to log on and launch the application.  Users will see the below prompt:

To fix this:

1. Log on to the server that is hosting the QuickBooks database service

2. Launch services.msc and navigate to the QuickBooksDBxy service (xy changes depending on version)

3. Open the properties of the service and select the “Log On” tab

4.  Change the “Log on as:” to “Local System account” with the “Allow service to interact with desktop” option checked

5. Click OK and restart the service

 

Source