Windows mount point manager
Thus someone viewing the MountedDevices registry key would be able to detect that all four persistent names point to the same volume. The following screen shot illustrates how persistent names appear in the MountedDevices registry key.

The mount manager relies on the Plug and Play device interface notification mechanism to alert it to volume arrival and removal. Upon receiving a Plug and Play notification of the arrival of a volume interface, the mount manager sends the client three device control IRPs: IOCTL_MOUNTDEV_QUERY_DEVICE_NAME, IOCTL_MOUNTDEV_QUERY_UNIQUE_ID, and IOCTL_MOUNTDEV_QUERY_SUGGESTED_LINK_NAME.
The mount manager relies entirely on the client to provide the unique volume ID; if the client does not provide one, the mount manager cannot assign mount points, such as drive letters, to the volume. If a client alerts the mount manager to the arrival of its volume but fails to provide a unique ID when queried, the mount manager places the volume on a dead mounted device list.
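The same unique-ID query can also be issued from user mode with DeviceIoControl, which is a convenient way to see what a volume's client driver reports. The following is a minimal sketch under a few assumptions: it targets the C: volume purely as an example, and it repeats the IOCTL and structure definitions that normally come from the WDK header mountdev.h so that it builds with only the SDK.

    #include <windows.h>
    #include <winioctl.h>
    #include <stdio.h>

    /* Normally defined in the WDK header mountdev.h; repeated here so
       the sample is self-contained. */
    #define MOUNTDEVCONTROLTYPE ((ULONG)'M')
    #define IOCTL_MOUNTDEV_QUERY_UNIQUE_ID \
        CTL_CODE(MOUNTDEVCONTROLTYPE, 0, METHOD_BUFFERED, FILE_ANY_ACCESS)

    typedef struct _MOUNTDEV_UNIQUE_ID {
        USHORT UniqueIdLength;
        UCHAR  UniqueId[1];   /* variable-length blob follows */
    } MOUNTDEV_UNIQUE_ID;

    int main(void)
    {
        /* "\\.\C:" is an example; any volume handle works. Desired
           access 0 is enough for a FILE_ANY_ACCESS query IOCTL. */
        HANDLE hVol = CreateFileW(L"\\\\.\\C:", 0,
                                  FILE_SHARE_READ | FILE_SHARE_WRITE,
                                  NULL, OPEN_EXISTING, 0, NULL);
        if (hVol == INVALID_HANDLE_VALUE) {
            fprintf(stderr, "CreateFile failed: %lu\n", GetLastError());
            return 1;
        }

        union { MOUNTDEV_UNIQUE_ID id; BYTE raw[512]; } buf;
        DWORD bytes = 0;
        if (DeviceIoControl(hVol, IOCTL_MOUNTDEV_QUERY_UNIQUE_ID,
                            NULL, 0, &buf, sizeof(buf), &bytes, NULL)) {
            printf("unique ID, %u bytes:", buf.id.UniqueIdLength);
            for (USHORT i = 0; i < buf.id.UniqueIdLength; i++)
                printf(" %02X", buf.id.UniqueId[i]);
            printf("\n");
        } else {
            fprintf(stderr, "DeviceIoControl failed: %lu\n", GetLastError());
        }

        CloseHandle(hVol);
        return 0;
    }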
After the mount manager receives a unique volume ID for a newly introduced volume, it searches its database for all of the persistent names assigned to that unique ID and creates a symbolic link to the volume for each persistent symbolic link name. When the mount manager detects that a volume has gone offline, it deletes the symbolic links that point to the device object, but it does not delete the corresponding symbolic link names from its database.
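The results of this link management are visible through documented Win32 APIs: a program can enumerate each volume's persistent \\?\Volume{GUID}\ name along with the drive letters and folder paths currently linked to it. A short sketch:

    #include <windows.h>
    #include <stdio.h>
    #include <wchar.h>

    int main(void)
    {
        WCHAR volName[MAX_PATH];
        HANDLE hFind = FindFirstVolumeW(volName, MAX_PATH);
        if (hFind == INVALID_HANDLE_VALUE) return 1;

        do {
            /* Persistent name in \\?\Volume{GUID}\ form. */
            wprintf(L"%s\n", volName);

            /* Mount points (drive letters and folders) linked to it,
               returned as a REG_MULTI_SZ-style list of paths. */
            WCHAR paths[4096];
            DWORD len = 0;
            if (GetVolumePathNamesForVolumeNameW(volName, paths,
                                                 ARRAYSIZE(paths), &len)) {
                for (WCHAR *p = paths; *p; p += wcslen(p) + 1)
                    wprintf(L"    mounted at: %s\n", p);
            }
        } while (FindNextVolumeW(hFind, volName, MAX_PATH));

        FindVolumeClose(hFind);
        return 0;
    }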
I think it has something to do with dynamic disks. I found some info here, Win2k related, but it must be something new to Service Pack 2, as it worked fine in SP1. I have disabled the Logical Disk Manager, but that had no effect; the quest continues.
I've tried to solve this puzzle myself, but I can't seem to figure out which process is creating the MountPointManagerRemoteDatabase files.
The information on the net seems to be rather sparse. However, during my testing I've discovered that the files aren't really used or written to. If I delete the "System Volume Information" folders, they're recreated at startup. If I create the MountPointManagerRemoteDatabase file myself, clear the archive attribute, note down the timestamp, and restart the machine, I can see that the timestamp has not changed and the archive bit is still off, hence no writing to the file.
So some process is checking whether the file exists and recreating it if it doesn't, but it seems the file itself is useless.
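For anyone who wants to repeat that experiment, a minimal sketch of the check follows: it reports whether the file exists, whether its archive bit is set, and its last-write time. The drive letter is a placeholder, and note that the System Volume Information folder is normally ACL'd so that only SYSTEM has access, so the program may need to run under an account that can reach it.

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* Placeholder path; adjust the drive letter as needed. */
        const WCHAR *path =
            L"D:\\System Volume Information\\MountPointManagerRemoteDatabase";

        WIN32_FILE_ATTRIBUTE_DATA info;
        if (!GetFileAttributesExW(path, GetFileExInfoStandard, &info)) {
            wprintf(L"not found or inaccessible (error %lu)\n",
                    GetLastError());
            return 1;
        }

        SYSTEMTIME st;  /* last-write time, reported in UTC */
        FileTimeToSystemTime(&info.ftLastWriteTime, &st);
        wprintf(L"archive bit: %s, last write: %04u-%02u-%02u %02u:%02u\n",
                (info.dwFileAttributes & FILE_ATTRIBUTE_ARCHIVE)
                    ? L"set" : L"clear",
                st.wYear, st.wMonth, st.wDay, st.wHour, st.wMinute);
        return 0;
    }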
I have still not made any progress. I think the file is linked to dynamic disks, so in theory it should only be written to (hence a timestamp change) if you have converted your drives to dynamic. I don't understand why it needs to be created if the drives are basic, or why it only appears in SP2, as dynamic disks have been around since Win2k, I think. I've tried some things in a VM, but no luck. I've disabled all services, but the dir comes back every restart.
I tried to monitor the startup with Filemon, but no luck. The dir seems to be created before the RunOnce entries from the registry run.

That's the same conclusion I've come to. I've monitored my machine with Filemon for 96 hours, and the dirs aren't coming back unless I restart the machine.
I guess we'll have to accept that, unless someone hacks some DLL file or finds a regkey that suppresses that behaviour.

Click the root disk, click Apply, and then click OK. This dependency will cause the resource to come online after the disk resource that hosts the mount point is successfully brought online.
Click Move this resource to Another Service or application to move the resource to the appropriate application or service group. Create a dependency in the mounted volume disk resource that specifies the disk that is hosting the mount point folder. This makes the mounted volume dependent on the host volume, and it makes sure that the host volume comes online first.
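Where scripting this is preferable to the UI steps above, the Cluster API exposes the same operation through AddClusterResourceDependency. A hedged sketch, assuming placeholder resource names "Mounted Disk" for the mounted volume's disk resource and "Host Disk" for the disk hosting the mount point folder (link against clusapi.lib):

    #include <windows.h>
    #include <clusapi.h>
    #include <stdio.h>

    int main(void)
    {
        HCLUSTER hCluster = OpenCluster(NULL);   /* local cluster */
        if (!hCluster) return 1;

        /* Placeholder names; substitute the real resource names. */
        HRESOURCE hMounted = OpenClusterResource(hCluster, L"Mounted Disk");
        HRESOURCE hHost    = OpenClusterResource(hCluster, L"Host Disk");

        if (hMounted && hHost) {
            /* Make the mounted volume depend on its host volume, so
               the host is brought online first. */
            DWORD rc = AddClusterResourceDependency(hMounted, hHost);
            printf("AddClusterResourceDependency: %lu\n", rc);
        }

        if (hMounted) CloseClusterResource(hMounted);
        if (hHost)    CloseClusterResource(hHost);
        CloseCluster(hCluster);
        return 0;
    }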
If you move a mount point from one shared disk to another shared disk, make sure that the shared disks are located in the same group. Try to use the root host volume exclusively for mount points.
The root volume is the volume that hosts the mount points. This practice greatly reduces the time that is required to restore access to the mounted volumes if you have to run the Chkdsk tool. It also reduces the time that is required to restore from backup on the host volume. If you use the root host volume exclusively for mount points, the size of the host volume must be at least 5 megabytes (MB). This reduces the probability that the volume will be used for anything other than the mount points.
In a cluster where high availability is important, you can make redundant mount points on separate host volumes. This helps guarantee that if one root host volume is inaccessible, you can still access the data that is located on the mounted volume through the other mount point.
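Creating the redundant mount points themselves can be done with the documented SetVolumeMountPoint API. A minimal sketch, assuming a placeholder volume GUID and the example folders D:\Data\ and E:\Data\ on two separate host volumes (requires administrator rights, and each target directory must be empty):

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* Persistent name of the data volume (placeholder GUID). */
        const WCHAR *vol =
            L"\\\\?\\Volume{00000000-0000-0000-0000-000000000000}\\";

        /* The same volume linked under two different host volumes. */
        const WCHAR *mounts[] = { L"D:\\Data\\", L"E:\\Data\\" };

        for (int i = 0; i < 2; i++) {
            CreateDirectoryW(mounts[i], NULL);  /* must end up empty */
            if (SetVolumeMountPointW(mounts[i], vol))
                wprintf(L"mounted at %s\n", mounts[i]);
            else
                wprintf(L"%s failed: %lu\n", mounts[i], GetLastError());
        }
        return 0;
    }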
Because the user data that is located on LUN3 depends on both the D and E volumes, you must temporarily remove the dependency of any failed host volume until the volume is back in service. Otherwise, the user data that is located on LUN3 remains in a failed state.
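The temporary removal can likewise be scripted. A sketch mirroring the earlier dependency example, again with placeholder resource names ("LUN3 Disk" for the data volume's resource, "Host Disk E" for the failed host):

    #include <windows.h>
    #include <clusapi.h>
    #include <stdio.h>

    int main(void)
    {
        HCLUSTER hCluster = OpenCluster(NULL);
        if (!hCluster) return 1;

        /* Placeholder names; substitute the real resource names. */
        HRESOURCE hData       = OpenClusterResource(hCluster, L"LUN3 Disk");
        HRESOURCE hFailedHost = OpenClusterResource(hCluster, L"Host Disk E");

        if (hData && hFailedHost) {
            /* Drop the failed host's dependency so the data volume
               can come online; re-add it once the host is repaired. */
            DWORD rc = RemoveClusterResourceDependency(hData, hFailedHost);
            printf("RemoveClusterResourceDependency: %lu\n", rc);
        }

        if (hData)       CloseClusterResource(hData);
        if (hFailedHost) CloseClusterResource(hFailedHost);
        CloseCluster(hCluster);
        return 0;
    }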