Monday, December 04, 2023

Windows error on MMC when trying Certificate Snap-in

 So when you try to run the Certificates snap-in from MMC, it keeps dying on you? It just disappears, or you get a popup complaining about some faulty memory issue and such.

You check the Application event log and find Event ID 1000 with source "Application Error"?

The application error shows one of these exception codes:

Exception code: 0xc000041d

Exception code: 0xc015000f

Exception code: 0xc0000374


0. Prerequisite: you have RSAT installed on your computer!!

1. Run "sfc /scannow" and confirm that "00000224 Hashes for file member [l:13]'srmclient.dll' do not match." is logged in CBS.log. (Note: if you keep running sfc /scannow, it will keep logging the same error.)

2. Go to "", copy the script, and run it in an elevated PowerShell (x64) session.

3. Run "sfc /scannow" twice and confirm that there are no more integrity violations. (Note: the first sfc scan will fix the issue and the second one will let you verify that it is fixed.)

4. You may have to reboot the system.

5. Try the cert snap-in again.
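The CBS.log check in step 1 can be scripted. Here is a minimal sketch, assuming you copied CBS.log (normally at C:\Windows\Logs\CBS\CBS.log) somewhere you can grep it; the sample line below stands in for the real file:

```shell
# Minimal sketch: search a saved copy of CBS.log for the hash-mismatch entry.
# The sample line below (taken from step 1 above) stands in for the real log.
cat > CBS.log <<'EOF'
00000224 Hashes for file member [l:13]'srmclient.dll' do not match.
EOF

# grep exits 0 when the corruption marker is present
if grep -q "Hashes for file member .*'srmclient.dll' do not match" CBS.log; then
  echo "srmclient.dll integrity violation found"
fi
```

If the grep matches, srmclient.dll is the corrupted member and the repair script is the next step.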

Friday, September 16, 2022

SFC /scannow vs Dism /Online /Cleanup-Image /ScanHealth and the "The component store is repairable."

 So I was curious to find out the difference between sfc and DISM ScanHealth, and furthermore what "The component store is repairable." means. I could not find a clear answer, so I did some experiments.

Long story short "The component store is repairable." means there is a system integrity problem.

sfc /scannow and dism /ScanHealth are not the same; it is better to run them both.

Here is what I did. I had a Windows server that spent more than 1 hour rebooting after the monthly patch was applied, so I ran an SFC scan and a DISM scan. The DISM scan showed "The component store is repairable." and "The operation completed successfully."  <-- this is very misleading. It should be written more clearly, like "The operation completed with error: component store needs repair". I almost ignored it.

Anyhow, I ran both the DISM scan and the SFC scan, and both found problems. After I fixed the issues, the system now takes less time to reboot after a patch is applied.

C:\Windows\system32>sfc /scannow

Beginning system scan.  This process will take some time.

Beginning verification phase of system scan.

Verification 100% complete.

Windows Resource Protection found corrupt files and successfully repaired

them. Details are included in the CBS.Log windir\Logs\CBS\CBS.log. For

example C:\Windows\Logs\CBS\CBS.log. Note that logging is currently not

supported in offline servicing scenarios.

C:\Users\Administrator>Dism /Online /Cleanup-Image /ScanHealth

Deployment Image Servicing and Management tool

Version: 10.0.14393.4169

Image Version: 10.0.14393.4169

[==========================100.0%==========================] The component store is repairable.

The operation completed successfully.


C:\Users\Administrator>Dism /Online /Cleanup-Image /RestoreHealth /Source:WIM:D:\sources\install.wim:2 /LimitAccess

Deployment Image Servicing and Management tool

Version: 10.0.14393.4169

Image Version: 10.0.14393.4169

[==========================100.0%==========================] The restore operation completed successfully.

The operation completed successfully.

C:\Windows\system32>Dism /Online /Cleanup-Image /ScanHealth

Deployment Image Servicing and Management tool

Version: 10.0.14393.4169

Image Version: 10.0.14393.4169

[==========================100.0%==========================] No component store corruption detected.

The operation completed successfully.
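Because DISM prints "The operation completed successfully." even when the store is flagged repairable, anything that automates this has to look for the real marker instead of the exit message. A minimal sketch; the sample output is taken from the transcript above, and the suggested repair command is the one used in this post:

```shell
# DISM says "The operation completed successfully." even when the component
# store is repairable, so test for the real marker in the captured output.
needs_repair() {
  printf '%s\n' "$1" | grep -q 'The component store is repairable\.'
}

scan_output='[==========================100.0%==========================] The component store is repairable.
The operation completed successfully.'

if needs_repair "$scan_output"; then
  echo 'repairable - run Dism /Online /Cleanup-Image /RestoreHealth, then sfc /scannow'
fi
```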

Wednesday, August 03, 2022

How to: Linux, MariaDB, PAM, and SSSD

 So I tried to make MariaDB work with SSSD so I can use AD user accounts to access MariaDB, and I found many, many web sites with questions, not answers.

I spent 3 days chasing down this problem and found the easiest solution.

So here we go,

I used Oracle Linux 8, installed SSSD and MariaDB, and set up SSSD and PAM to work with it.

Your Linux server name should be <servername>.<your domain name>, not <servername>.localhost.


I assume you have already installed MariaDB.

Install SSSD (Required)

#dnf install sssd realmd oddjob oddjob-mkhomedir adcli samba-common samba-common-tools krb5-workstation openldap-clients policycoreutils-python-utils -y

#realm discover <yourdomain name>

eg: #realm discover

# realm join --user=<your domain user account name> <domain name>

eg: # realm join --user=john

Edit /etc/sssd/sssd.conf (optional)

If you want users to log in with fully qualified names (user@domain):

use_fully_qualified_names = True

fallback_homedir = /home/%u@%d

If you want plain short names (user):

use_fully_qualified_names = False

fallback_homedir = /home/%u

restart sssd

# systemctl restart sssd

Install chronyd (required to match time with AD; change the settings if you have a local NTP service)

#dnf -y install chrony

Restart chronyd and check its time sources

#systemctl restart chronyd

#chronyc sourcestats

Check status of SSSD

# systemctl status sssd

If you see any errors, fix them before you move on.
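Once SSSD is healthy, it is worth confirming that it actually resolves AD accounts through NSS. A minimal sketch; the user name is whatever AD account you joined with (john is just this post's example):

```shell
# Check that a user resolves through NSS (local files or SSSD).
# On a box joined to AD via SSSD, "getent passwd john" should print an entry.
check_user() {
  if getent passwd "$1" >/dev/null; then
    echo "$1 resolves"
  else
    echo "$1 does NOT resolve - check sssd.conf and the realm join" >&2
    return 1
  fi
}
```

Usage: `check_user john` (with short names) or `check_user john@yourdomain` (with fully qualified names).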

Install PAM for MariaDB (required)

Log in to MariaDB as root and install the PAM plugin

> mysql -u root -p 

> INSTALL SONAME 'auth_pam';

> show plugins soname like '%pam%';


+------+--------+----------------+---------+---------+
| Name | Status | Type           | Library | License |
+------+--------+----------------+---------+---------+
| pam  | ACTIVE | AUTHENTICATION |         | GPL     |
+------+--------+----------------+---------+---------+

1 row in set (0.001 sec)

> exit

Edit /etc/sssd/sssd.conf (required)

# vi /etc/sssd/sssd.conf

This is my setting, change yours accordingly.



[sssd]

domains = #<- change this

config_file_version = 2

services = nss, pam

debug_level=9 #<-- comment out once SSSD with mariaDB works


[domain/<your domain name>]

ad_domain = #<- change this

krb5_realm = #<- change this

realmd_tags = manages-system joined-with-adcli

cache_credentials = True

id_provider = ad

krb5_store_password_if_offline = True

default_shell = /bin/bash

ldap_id_mapping = True

use_fully_qualified_names = False

fallback_homedir = /home/%u

#access_provider = ad #<- we will be using simple access provider not AD, comment it out.

access_provider=simple #<- add this

#access_provider=permit #<- this is for debugging purpose, google it if you are curious.

simple_allow_groups=mariadbusers  # <- this is an AD security group, we will create later, name it to something you like.


Restart SSSD

# systemctl restart sssd

Create the mariadb and mysql PAM service files. MariaDB calls this the "PAM service name"; it is stored in the user's authentication_string.

# cd /etc/pam.d

# vi mariadb

and add the two lines below

auth    required pam_sss.so domains=<your domain name>

account required pam_sss.so domains=<your domain name>

or, without restricting the domain:

auth    required pam_sss.so

account required pam_sss.so

Copy the mariadb file.

# cp mariadb mysql
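The two steps above can be done in one go. A minimal sketch; it writes to a demo directory instead of /etc/pam.d so it can be dry-run as a normal user, and pam_sss.so is the SSSD PAM module this setup relies on:

```shell
# Write the mariadb PAM service file, then copy it to mysql.
# PAM_DIR points at a demo directory here; on the real server use /etc/pam.d.
PAM_DIR=./pam.d-demo
mkdir -p "$PAM_DIR"

cat > "$PAM_DIR/mariadb" <<'EOF'
auth    required pam_sss.so
account required pam_sss.so
EOF

cp "$PAM_DIR/mariadb" "$PAM_DIR/mysql"
```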

OK, you are almost there.

Login to mariaDB and create new users.

# mysql -u root -p

There are 2 ways to create a user.

>  create user '<username>'@'%' identified via pam using 'mariadb';

or:

>  create user '<username>'@'%' identified via pam;

The <username> is a domain user name.

eg: if you have a AD user domain\john

>  create user 'john'@'%' identified via pam;

or:

>  create user 'john'@'%' identified via pam using 'mariadb';

Commit the change

> flush privileges;

Query OK, 0 rows affected (0.002 sec)

Note: % means allow from all IPs; change it accordingly to meet your security requirements.

If you add using 'mariadb', it will use the PAM service file /etc/pam.d/mariadb to initiate the auth; that file tells PAM to authenticate through SSSD.

If you do not add using 'mariadb', it will use the PAM service file /etc/pam.d/mysql to initiate the auth; that file likewise tells PAM to authenticate through SSSD.



This is from the article:

You can also specify a PAM service name for MariaDB to use by providing it with the USING clause. For example:

CREATE USER username@hostname IDENTIFIED VIA pam USING 'mariadb';

This line creates a user that needs to be authenticated via the pam authentication plugin using the PAM service name mariadb. As mentioned in a previous section, this service's configuration file will typically be present in /etc/pam.d/mariadb.

If no service name is specified, then the plugin will use mysql as the default PAM service name.


Verify user creation

In this example I created john with the mariadb auth string and sam without one; sam will use the mysql auth string (aka PAM service name).

MariaDB [(none)]> select host,user,plugin,authentication_string from mysql.user;


+-----------+------+--------+-----------------------+
| host      | user | plugin | authentication_string |
+-----------+------+--------+-----------------------+
| localhost | root |        |                       |
| %         | john | pam    | mariadb               |
| %         | sam  | pam    |                       |
+-----------+------+--------+-----------------------+

7 rows in set (0.001 sec)

We have both /etc/pam.d/mariadb and /etc/pam.d/mysql, just in case! OK?

Now create an AD security group "mariadbusers".

You remember "simple_allow_groups=mariadbusers  # <- this is an AD security group, name it as you like." on sssd.conf?

Yes, use the same name, and add users on it.

In this example we should add "domain\john" and "domain\sam"

You must add all AD users who need to log in to the server, plus the mariadb users. It controls server access as well: if you are not a member of this group, you won't be able to PuTTY into the server.

Local Linux users such as root are not subject to this group; we can't add Linux users to an AD group anyway, right? lol

How does this work?

When john tries to log in to the server, SSSD will authenticate him and allow access if he is a member of the security group.

Once john is logged into the server and tries to log in to MariaDB, it will use the PAM service file '/etc/pam.d/mariadb' to authenticate and check whether the user is allowed in via sssd.conf's access_provider, in this case the AD group "mariadbusers".

So what happens when you create a user without an auth string and you don't have /etc/pam.d/mysql, but the user is a member of the security group "mariadbusers"?

The user will be able to log in to the Linux server but not to MariaDB; you will get a "permission denied" error.

Have fun!

Friday, June 03, 2022

How to mitigate CVE-2022-30190, known as "Follina" also known as MS-MSDT vulnerability with SCCM.

Step 1, from SCCM, create a new script under Software Library.

Here is a script body.

reg delete HKEY_CLASSES_ROOT\ms-msdt /f

Step 2, use CMPivot to create a new collection.

Now select a computer collection, perhaps starting with a small one, right click and start CMPivot.

In the query window, type the query below and run it.

Registry('HKLM:\SOFTWARE\Classes\ms-msdt') | where Property == 'EditFlags'

You might ask why we are checking HKLM, not HKCR. Here is the reason:

HKEY_CLASSES_ROOT is not a real physical hive (it stores no data); it is just a merged view of HKEY_CURRENT_USER\Software\Classes and HKEY_LOCAL_MACHINE\Software\Classes. Updates to the underlying keys are instantly visible in HKEY_CLASSES_ROOT. (source:

In fact, you can't query HKCR using CMPivot; it will return nothing.

Once you have the list of affected systems, click "Create collection" in the top right corner of CMPivot.

Give it a good name and create the collection.

Right click the collection you created and run the script you created from the Step 1.

Repeat this against all the remaining systems.
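To spot-check a single machine after the script runs, you can test whether the ms-msdt key is actually gone; reg query exits non-zero when a key does not exist. A minimal sketch (REG_CMD is injectable here purely so the logic can be exercised off a Windows box; on a real system leave the default):

```shell
# Verify the Follina mitigation: the ms-msdt handler key should be gone.
# "reg query" exits non-zero when the queried key does not exist.
REG_CMD=${REG_CMD:-"reg query HKEY_CLASSES_ROOT\\ms-msdt"}

msdt_mitigated() {
  if $REG_CMD >/dev/null 2>&1; then
    echo 'ms-msdt key still present - NOT mitigated'
    return 1
  fi
  echo 'ms-msdt key absent - mitigated'
}
```

Usage on the target machine: just run `msdt_mitigated` after the SCCM script has deleted the key.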


Wednesday, June 19, 2019

Commvault DASH copy nightmare, and how to improve it.

So I would like to write about my experience with Commvault DASH (aka Aux) copy.

DASH copy is used when you want to ship local backups to a remote site for off site backup / DR backup.

I have two MediaAgents (MAs); a MediaAgent here is basically disk storage. One is in our HQ and the other at our DR site, and the two MAs are connected with a 500 Mbps pipe.

This is about my experience with a 4TB SQL database.

Commvault uses a database called the dedup database (DDB) to keep track of changes. It makes a full backup once and then backs up only the changes (deltas). The DDB stores so-called "signatures"; using the signatures, it backs up only what has changed. The DDB is kept on both the local and DR MA. This reduces backup time and storage space. Great!

For DASH copy, there are two options, disk optimized and network optimized; disk optimized is the default. There is also a sub-option called "source side cache".

According to their tech document, a disk optimized DASH copy ships only the deltas/changes, using signatures to determine what the destination already has. Network optimized will re-read the local backup, create signatures on the fly, and ship only deltas to the DR MA (MediaAgent).

My experience is that with disk optimized DASH copy, it ships the entire full backup to the DR MA every time the DASH copy runs, and the DR MA calculates signatures and saves only the difference. When I ran it, it took more than 24 hours, and daily backups accumulated over and over again and never finished. I called their tech support; they said to stay put because the DR MA was generating signatures, which takes time, and once generated it would be faster. It finally shipped the first copy, and I expected to see improvements. NOT! There is no DDB involved: it shipped 4TB over the 500 Mbps link, over and over again. It would never finish; I had more than 15 backups stacked up, 4TB x 15.

I called them and they insisted that disk optimized DASH copy ships only deltas. I showed them what I found on the network utilization report, and they said to change it to network optimized.

With network optimized, it actually reads the whole backup, at a cost: the local MA's disk queue length went up beyond 50 (my local MA has 18 disks), but it sent deltas only. Wow! It took a while to read the 4TB of data, 11 to 13 hours, and it also had to reseed, which ended up eating licenses.

I called again and they said to enable the local cache with disk optimized; it creates a local signature database on the "client computer" and makes the process faster. Hmm... what about the DDB?
It turned out the local cache database is created not on the SQL database server but on the local MA. It reseeded again, and ate a license of course. But after it created the local signature database, the DASH copy now takes 4 to 5 hours to finish.


To summarize:

1. "Disk optimized + local cache" is the best option for DASH copy. Disk optimized alone ships the full backup to the DR MA and recalculates signatures there; no DDB is involved.
2. For DASH copy, client computer = local MA, if you enable the local cache.
3. The network optimized option does not optimize network traffic; I don't even know why the option exists.
4. When you create a DASH copy, "disk optimized - local cache" is the default. Enable the local cache to improve DASH copy performance and network utilization. But remember that when you do, it might reseed!

SCCM UpdatesDeployment.log's assignment ID is the deployment ID: hunting for job error 0x8024000f

One day I found this in UpdatesDeployment.log:

Job error (0x8024000f) received for assignment ({bdd02889-257b-431c-98b3-965a16ee51d7}) action

I was wondering what the heck an assignment ID is; after googling, I found the assignment ID is actually the deployment ID. BAH!

To find out what that is, there are two ways to do it.

1. From the SCCM console -> Monitoring -> Deployments, and search for the ID.
2. Open SCCM PowerShell: Get-CMDeployment -DeploymentID 
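The GUID can also be pulled out of the log line with a quick grep. A minimal sketch using the exact log line from above; in practice you would grep UpdatesDeployment.log itself:

```shell
# Extract the assignment (deployment) GUID from an UpdatesDeployment.log line.
line='Job error (0x8024000f) received for assignment ({bdd02889-257b-431c-98b3-965a16ee51d7}) action'

dep_id=$(printf '%s\n' "$line" | grep -oE '\{[0-9a-fA-F-]+\}')
echo "$dep_id"   # feed this value to Get-CMDeployment -DeploymentID
```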

Besides, a fantastic blog about 0x8024000f can be found here:

Also a reference for the error code:

Wednesday, March 13, 2019

Jan. 2019 Exchange Security Update KB4471389 issue

It seems some admins broke their Exchange server after installing KB4471389.
Symptoms: all Exchange-related services die after the update, and admins had to reinstall Exchange in recovery mode.

M$ says that can happen when an admin runs the update in normal (non-elevated) mode.

Yes, the next Exchange CU will include that patch as well. So run all Exchange updates elevated, from an admin prompt, all the time!!

Tuesday, March 05, 2019

WSUS server cleanup

WSUS cleanup sometimes takes a really long time and fails.

I would like to explain how you can automate the process so there will not be much to clean up, and the cleanup will run faster.

You can run the GUI version of the WSUS Server Cleanup Wizard, or run a PowerShell command.

On the WSUS server, open PowerShell in admin mode and run:

Get-WsusServer | Invoke-WsusServerCleanup -CleanupObsoleteComputers -CleanupObsoleteUpdates -CleanupUnneededContentFiles -CompressUpdates -DeclineExpiredUpdates -DeclineSupersededUpdates 

Yes, you can save it as a script and run it from Task Scheduler! Run it daily or weekly.

While the cleanup job runs, the "WSUS Service" service will be stopped, and it is automatically started when the cleanup job ends. Do not start it manually; that will extend the cleanup time.

If the cleanup job runs too long and fails, your best option is to rebuild WSUS.

After you rebuild WSUS, if you are using SCUP and get errors like "xxxxx not found on WSUS SMS_WSUS_CONFIGURATION_MANAGER" in WSyncMgr.log while WSUS tries to sync, see this blog