This quick start guide provides you with the basic information to configure CloudM Migrate for a migration from Google Vault to Microsoft Azure Storage. It is highly recommended that you read the documentation in full for your platforms in order to understand all of the options available to you during a migration.
Vault file migrations can only be done to 'User' import object types. Shared drive or SharePoint site imports are not currently allowed.
System Requirements (Self-Hosted)
- 64 bit Operating system: Windows Server 2016+ (Clean build recommended)
- Microsoft .NET Framework 4.8
- Recommended system specification - Primary Server:
- 3GHz 8 Core Processor or better
- 200+GB Disk space
- 16+GB Memory
- Recommended system specification - Secondary Server(s):
- 3GHz 4 Core Processor or better
- 100GB Disk space
- 8+GB Memory
If you are looking to complete a 'large migration' (a migration of more than 25,000 users or 10 million objects), refer to the following articles for additional considerations:
- Large Migration Infrastructure
- Running SQL Server or Redis on an Independent Server
- Existing SQL Server Database Configuration
If you are storing Drive document mappings and running a large migration, contact the support team.
Primary and Secondary servers communicate with the source and destination platforms, and with the CloudM Migrate licensing platform, over HTTP/HTTPS. Ports 80 and 443 must therefore be open between these environments.
Your servers must also be able to reach the CloudM license server URLs. In addition, consider the following:
- SQL Server runs on TCP 1433 and UDP 1434. SQL Server must be installed on the Primary/SQL server with the TCP or UDP protocol enabled. For information on setting the TCP port to 1433, refer to the following article.
- Redis runs on port 6379
- gRPC on the TCP ports below. Note: it is possible to customise these ports when installing via the command line using the instructions here: Installing / Upgrading from the Command Line
  - 30061 (Primary Service)
  - 30062 (Secondary Service)
  - 30063 (Results Logs Service)
- All of Google's API endpoints are listed here: https://support.google.com/a/answer/60764
- All Microsoft 365 URLs and IP address ranges are listed here: https://docs.microsoft.com/en-us/microsoft-365/enterprise/urls-and-ip-address-ranges?view=o365-worldwide
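As a quick sanity check before installation, the TCP ports listed above can be probed from a migration server. The sketch below is illustrative only (hostnames are placeholders you supply), and note that UDP 1434 cannot be verified with a plain TCP connect:

```python
import socket

# TCP ports used by a self-hosted CloudM Migrate deployment (from the list above).
# UDP 1434 (SQL Browser) is excluded: it cannot be probed with a TCP connect.
REQUIRED_TCP_PORTS = {
    "http": 80,
    "https": 443,
    "sql-server": 1433,
    "redis": 6379,
    "grpc-primary": 30061,
    "grpc-secondary": 30062,
    "grpc-results": 30063,
}

def is_port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def probe(host: str) -> dict:
    """Probe every required TCP port on one host and report reachability."""
    return {name: is_port_open(host, port) for name, port in REQUIRED_TCP_PORTS.items()}
```

Run `probe()` against your Primary, Secondary, SQL and Redis hosts before starting the installer to catch firewall issues early.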
CloudM Migrate Hosted
CloudM Migrate Hosted makes migration as simple as possible. There are no system requirements and no installation: you simply access the Hosted migration platform, and configurations are pre-provisioned. See here for more information.
General and Source Platform prerequisites
- Google Vault is part of your Google Workspace account, so perform the same setup steps as described for Google Workspace.
- You will need at least 30GB free disk space on each migration server.
- Additional Google APIs must be enabled using API Manager and Scopes added using the Google Admin Console
- Google Vault API, Scopes (https://www.googleapis.com/auth/ediscovery, https://www.googleapis.com/auth/ediscovery.readonly)
- Google Storage API, Scope (https://www.googleapis.com/auth/devstorage.read_write)
- Billing must be enabled for the Google project being used for the migration. This is necessary to avoid very low Google Vault export quota limits. Enabling it allows for higher export quotas, however, with a possible resulting cost. This cost is something your Google account manager should be able to discuss with you.
- Exporting from Google Vault is subject to quota limitations. If you find user migrations are failing with quota errors you should contact Google to have the limits increased.
Setting up the Service Account and enabling the APIs within Google Workspace for CloudM Migrate
Please refer to the Setting up the Service Account and enabling the APIs within Google Workspace for CloudM Migrate article.
Destination Platform prerequisites
Setting up a Storage Bucket
- Sign in to Microsoft Azure Home.
- Open the Portal menu using the hamburger menu icon.
- Select Storage Accounts.
- On the Storage Accounts screen, select Create.
On the Basic tab, enter the following information:
- Subscription - Select the subscription for the new storage account.
- Resource Group - Create a new resource group for this storage account, or select an existing one.
- Storage Account Name - Choose a unique name for your storage account. Storage account names must be between 3 and 24 characters in length and may contain numbers and lowercase letters only.
- Region - Select the appropriate region for your storage account.
- On the Advanced tab, set the Access Tier to either Hot or Cool, as required.
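The storage account naming rule above (3 to 24 characters, lowercase letters and numbers only) can be checked up front before you submit the form. A minimal sketch:

```python
import re

# Azure storage account names: 3-24 characters, lowercase letters and digits only.
_NAME_RE = re.compile(r"^[a-z0-9]{3,24}$")

def is_valid_storage_account_name(name: str) -> bool:
    """Return True if name satisfies Azure's storage account naming rules."""
    return bool(_NAME_RE.fullmatch(name))
```

Note that validity here only means the name is well-formed; the name must also be globally unique across Azure, which can only be confirmed by the portal itself.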
On the Data Protection tab, you can configure data protection options for blob data in your new storage account. These options can also be configured after the storage account is created.
- In CloudM Archive, the data retention rules can be overridden using Data Retention policies.
- Once you have configured these settings, select Review + Create, and then select Create again.
- The Storage Account will be created. This process can take a couple of minutes to complete.
- You will need to navigate back to the Storage Accounts screen.
- Click on the name of the newly created account in the list of Storage Accounts.
- In the menu, scroll down and select Blob Services > Containers.
- Now, select the + Container button.
In the pop up screen, enter:
- Name - A unique name for the container
- Public Access Level - Set to Private
- Advanced Settings > Encryption Scope - Leave as default.
- Click on Create.
- On the same Storage Account page, select the Settings > Access Keys menu option.
- On the Access Keys page, select the Show Keys button at the top of the screen.
- Copy the value in the Key 1 > Key field. You will need this value later, so copy it into a document or notepad application.
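The account name and the access key copied above are typically combined into a standard Azure storage connection string when tooling needs to reach the account directly. A sketch, using the documented connection string format (the name and key in the usage example are placeholders):

```python
def build_connection_string(account_name: str, account_key: str) -> str:
    """Build a standard Azure storage connection string from an account name and access key."""
    return (
        "DefaultEndpointsProtocol=https;"
        f"AccountName={account_name};"
        f"AccountKey={account_key};"
        "EndpointSuffix=core.windows.net"
    )
```

CloudM Migrate itself only asks for the account name and key separately (see the Destination Platform settings below), so this is just a convenience if you want to inspect the container with other Azure tooling.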
Set up Azure Key Vault (optional - only complete if encryption is required)
Encryption is not required, but you can use the following process if you need it.
Use the same method (with or without encryption) throughout your project to avoid issues.
- In the Search bar at the top of the screen, enter Key Vaults and select the Key Vaults option.
- On the Key Vaults screen, select Create.
- Under the Basics tab, enter the Subscription, Resource Group and Name.
- Select Review + Create, and then select Create again.
- While the Key Vault is being created, you will be taken away from the Key Vaults section, so navigate back to Key Vaults using the Search bar.
- On the Key Vaults screen, select the newly created Key Vault.
- Select Keys.
- Select Generate.
- Click on the newly created key, copy the key identifier without the version info at the end. This is the value for the CloudM Migrate Key Vault URL setting.
- Navigate to Azure Active Directory -> App Registrations, and select New registration.
- Fill out the details and select Register.
- On the App Registration screen, click on the name of the newly registered application.
- Select API permissions from the menu on the left side of the screen.
Select Add a permission and add the following:
- User.Read
- In the menu on the left, select the Certificates & Secrets option.
- Select New client secret and copy the secret's Value. This is the value for the CloudM Migrate Azure key vault client secret setting.
- In the left side menu, select Overview, and copy the Application (client) ID for the CloudM Migrate Azure key vault client ID setting.
- Return to Home > Key Vault.
- Ensure that the Permission Model is set to Vault access policy.
- Select + Add Access Policy.
- On the Add access policy screen, make sure that the Select principal value is set to the application registered above.
- Ensure that the Key Permissions field is set to Get, Decrypt, Encrypt and Unwrap Key.
- Select Add.
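The key identifier copied earlier has the form `https://<vault>.vault.azure.net/keys/<key-name>/<version>`, and the CloudM Migrate Key Vault URL setting wants it without the trailing version segment. A minimal sketch of the trim (the vault and key names in the test are placeholders):

```python
def strip_key_version(key_identifier: str) -> str:
    """Drop the trailing version segment from an Azure Key Vault key identifier."""
    base, _version = key_identifier.rstrip("/").rsplit("/", 1)
    return base
```

This assumes the identifier you copied ends in a version segment; if you already copied it without one, use the value as-is.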
Configure Source Platform settings
Choose Google Vault as the migration source and enter your Google Workspace settings into CloudM Migrate and then click Next.
- Domain Name - The domain name you will be migrating from. This should be the Internet domain name, not the local domain name.
- Admin Username - An administrator account for the domain specified, this will usually be an email address for a Super Admin.
- Authentication Method - Set whether to use a P12 key or a JSON key as the authentication method.
- Service Account Email Address - Before attempting to configure CloudM Migrate, you should have created a Google Cloud platform project and created a service account for it. If you have selected to use a P12 key, you will need to input the service account's email address in this field.
- Private Key - The file path to the P12 or JSON key that was generated and downloaded when creating the OAuth service account.
If you cannot find the private key, go back to Google Cloud Platform service accounts, select your project (if not already selected), use the option button on the right of the service account and click "Create Key". Select P12 or JSON and download the key file.
Select Next to test the connection to the platform.
Configure Destination Platform settings
- Domain Name – The name of the Microsoft domain to migrate to. This may be either a primary or secondary domain. Note: you can only migrate users to one domain at a time. If you have both primary and secondary domain users, they must be processed in separate migrations.
- Storage Account Name - The name of the Azure storage account.
- Account Key - The Access Key for the Azure storage account.
Azure Storage Details
- Container Name - The name of the container that has been created in Microsoft Azure, under Home > Storage accounts > (Your Storage Account) > Containers.
- Max File Size - The maximum size of the files to be uploaded (bytes).
- Compress Objects - Compress the Objects before they are uploaded. Setting this to True will use less cloud storage space at the expense of slowing down the Import.
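The Compress Objects trade-off above (less cloud storage used, slower import) can be illustrated with a gzip round trip. This is only an illustration of the principle, not CloudM Migrate's internal storage format:

```python
import gzip

def compress(data: bytes) -> bytes:
    """Compress an object's bytes before upload (saves storage, costs CPU time)."""
    return gzip.compress(data)

def decompress(blob: bytes) -> bytes:
    """Reverse the compression when the object is read back during import."""
    return gzip.decompress(blob)
```

Typical mailbox exports compress well because message bodies are repetitive text, which is why the storage saving can be worthwhile despite the slower import.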
Azure Key Vault Decryption Options (optional - complete if encryption required)
- Key Vault Decryption Key Url - The Url of the decryption key in Azure Key Vault.
- Client Id - Client Id used to access Azure Key Vault.
- Client Secret - Client Secret used to access Azure Key Vault.
Customer-Supplied Key Decryption Options (optional - complete if encryption required)
- Decryption Key File Path - The location of your decryption key file.
Select which items to migrate
It's now time to add which items you'd like to migrate.
To add the items that you want to migrate from your source platform to your destination platform, select the Add items to migrate drop-down menu and click one of the following options:
- Get Items from source - Get a full list of all items in the source platform.
- Bulk add / import items - Upload a CSV file to bulk add users.
- Add User / Resource / Group / Shared Drive / Team Site / Microsoft Team etc. - Manually add an item of the selected item type.
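For the bulk add / import option, the CSV is one row per user. The exact columns depend on your configuration, so the header names below (ExportName, ImportName) are illustrative assumptions rather than CloudM Migrate's required schema; check the in-product CSV template for the authoritative columns:

```python
import csv
import io

def parse_userlist(csv_text: str) -> list[dict]:
    """Parse a bulk-import CSV into a list of per-user dicts (illustrative columns)."""
    return list(csv.DictReader(io.StringIO(csv_text)))

# Hypothetical example: source Vault export name mapped to destination username.
sample = """ExportName,ImportName
alice@source.example.com,alice@dest.example.com
bob@source.example.com,bob@dest.example.com
"""
```

Validating the file locally like this (row count, no blank addresses) before uploading avoids discovering formatting errors mid-import.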
Selecting a Star next to any specific user or users will prioritize their migration. When a migration starts, threads will be assigned to any starred user first so that their migration can start immediately.
At this point, you can choose what to migrate for each user: Mail, Contacts, Calendars, Tasks, Classic Sites, Notes and Channels.
Enter your user's full email address within the Export Name field. If you have already created your Microsoft 365 users, then you will just need to enter their username.
Select how much content to migrate
CloudM Migrate lets you decide how much content to migrate to your domain by specifying required date ranges.
If you are changing your email address as part of the migration you can verify that the domain names are correct here. You can also specify specific Address Replacements in the respective section of the advanced settings.
Environment Scan allows you to plan and prepare your migration by performing analysis of your source file and mail environment and reporting important information such as item counts, data volume, permissions, and folder depth.
Reports are produced which can be exported and analysed. Using the information provided you can estimate your migration's duration more accurately, and address any potential issues before your migration even begins.
Selecting Scope of Scan
Depending on your source platform, you can choose to scan files, emails, or both sets of items.
- Leaving the Report on File Permissions setting unchecked will speed up the Environment Scan process.
- In order to include Report on File Permissions in the process, you will need to check this setting AND enable Document Sharing (or a setting related to document sharing) in the Source Platform settings. If either is not enabled, the Environment Scan will not scan or report on File Permissions.
Your CloudM Migrate userlist is used to define the scope of the scan. You can choose to scan all accounts from your list, or restrict the scan to users selected for migration.
How to run an Environment Scan
After entering your source and destination server settings, populating your userlist, and configuring your settings, you will be prompted to run an environment scan. It is optional, but recommended for file platforms.
Click Start and then confirm by selecting Start Environment Scan on the pop-up box to begin the scan. CloudM Migrate will connect to your source environment and capture file and/or mail information. This can take up to several hours, depending on the amount of data present.
Once the scan completes, the data is reported on the Environment Scans page and can be exported to file, using the Export Scan Results option.
Start your migration
To begin your migration, select Start.
Review your migration results
During the migration process, CloudM Migrate will report back in real time exactly who is being migrated and the items being processed. All you now need to do is sit back, relax and wait for your migration to complete.
Check the progress of your migration.
Once complete you can download a full report for your migration.
CloudM Migrate supports delta migrations of all migration types. To run a delta migration after the first pass, simply start the migration again. Already processed mailbox items will be skipped, and file items will be checked for changes and re-migrated if updated since the previous run.
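The delta behaviour described above (skip already-processed mailbox items, re-migrate file items changed since the last run) can be sketched as a simple timestamp comparison. This mirrors the behaviour as documented, not CloudM Migrate's actual implementation:

```python
from datetime import datetime

def needs_remigration(item_type: str, modified: datetime, last_run: datetime) -> bool:
    """Decide whether an already-processed item is migrated again on a delta pass.

    Mailbox items are skipped outright; file items are re-migrated only if
    they were modified after the previous migration run completed.
    """
    if item_type == "mail":
        return False
    return modified > last_run
```

This is why delta passes are typically much faster than the first pass: most items fall straight through the skip checks.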