Thursday, March 16, 2017

Creating Office 365 Random Passwords

If you aren't using a password synchronization tool with Office 365, such as Windows Azure Directory Sync with Password Sync, Office 365 will randomly generate temporary passwords. This works out fine if the person using the mailbox is setting up their own account in Outlook and on mobile devices. Since the ForceChangePassword attribute defaults to true, they are required to change their password the first time they log in. This is a great way to ensure that the passwords stay secure.
But what happens if IT staff are setting up Outlook and mobile devices and need a way to quickly configure the accounts? You could simply set all of the passwords to one password. All this requires is a CSV of the mailboxes and this handy PowerShell command:
Import-Csv .\new-passwords.csv | foreach {Set-MsolUserPassword -UserPrincipalName $_.email -NewPassword "Passw0rd." -ForceChangePassword $false}
This is not secure. If everyone knows what the initial password is, one could easily snoop around their colleagues’ emails, spoiling surprise birthday parties or worse.
If you need to set unique, complex passwords for multiple mailboxes, you can easily generate unique random passwords in Excel, and export them to a CSV file. PowerShell can then be used to update a group of mailbox passwords.
Here’s an overview of the process:
  1. Create an Excel file, and label column one "email" and column two "passwords" (these headers are referenced by the PowerShell commands below)
  2. Add the email addresses to the first column and the random character generator code in the second column (see below)
  3. Export the sheet as a CSV file, and name it new-passwords.csv
    • The random character formula won’t be exported, only the generated passwords.
    • Note: Each time the workbook recalculates (for example, when you edit any cell), the formula generates a new set of passwords, so be sure to export and save the CSV file once the values are final.
  4. Use PowerShell to set the new passwords
Here’s the Excel formula:
This formula will create an 8-character password with 4 letters and 4 numbers, similar to the temporary passwords that Office 365 initially creates.
=CHAR(RANDBETWEEN(65,65+25))&CHAR(RANDBETWEEN(97,122))&CHAR(RANDBETWEEN(97,122))&CHAR(RANDBETWEEN(97,122))&RANDBETWEEN(0,9)&RANDBETWEEN(0,9)&RANDBETWEEN(0,9)&RANDBETWEEN(0,9)
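If you'd rather skip Excel entirely, the same kind of password can be generated directly in PowerShell. This is only a sketch of the idea (the function name is made up), mirroring the 1 uppercase + 3 lowercase + 4 digit pattern of the formula above:
function New-RandomPassword {
    # One uppercase letter (A-Z); Get-Random's -Maximum is exclusive
    $upper = [char](Get-Random -Minimum 65 -Maximum 91)
    # Three lowercase letters (a-z)
    $lower = -join (1..3 | ForEach-Object { [char](Get-Random -Minimum 97 -Maximum 123) })
    # Four digits (0-9)
    $digits = -join (1..4 | ForEach-Object { Get-Random -Minimum 0 -Maximum 10 })
    "$upper$lower$digits"
}
New-RandomPassword   # e.g. Kqtz4812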
Use PowerShell to connect to Office 365. Be sure that the CSV file is in the PowerShell working directory.
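If you are not already connected, a minimal sketch using the MSOnline module looks like this; it assumes the module is installed and you have Office 365 admin credentials:
# Install-Module MSOnline   (one time, if the module is not already present)
Import-Module MSOnline
# Prompts for admin credentials and opens the Office 365 session
Connect-MsolService -Credential (Get-Credential)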
Be sure to test first:
  • Use the -WhatIf switch to test
  • You can also run it on one account to test. Use a CSV file with one mailbox, and confirm that everything works.
Run the following:
Import-Csv .\new-passwords.csv | foreach {Set-MsolUserPassword -UserPrincipalName $_.email -NewPassword $_.passwords -ForceChangePassword $false -WhatIf}
Once you’ve tested, remove the -WhatIf:
Import-Csv .\new-passwords.csv | foreach {Set-MsolUserPassword -UserPrincipalName $_.email -NewPassword $_.passwords -ForceChangePassword $false}
This will quickly set unique and complex passwords on all the mailboxes, all with ForceChangePassword set to $false, and leave you with a spreadsheet of all the passwords.

Thursday, September 1, 2016

How to make an existing Hyper-V Virtual Machine Highly Available

If you have a running Hyper-V Virtual Machine which is not part of a cluster, you can make this virtual machine highly available via the Failover Cluster Manager. First of all, the Virtual Machine has to be stored on shared storage. In Windows Server 2008 R2 this means the Virtual Machine and the attached Virtual Disks had to be stored on a Cluster Shared Volume (C:\ClusterStorage\…); with Windows Server 2012 the VM can also be stored on an SMB 3.0 file share.
Windows Server 2008 R2 Make Virtual Machine Highly Available
There is also a big difference between Windows Server 2008 R2 Hyper-V and Windows Server 2012: in Windows Server 2008 R2 the Virtual Machine had to be offline while you added it to the cluster, whereas in Windows Server 2012 you can do this online.
First open the Failover Cluster Manager, right-click on your cluster name, and in the menu select “Configure Role”.
Windows Server 2012 Failover Cluster Manager Configure Role
This starts the High Availability Wizard. First select the role you want to make highly available; in the Hyper-V case this is “Virtual Machine”.
Windows Server 2012 Failover Cluster Manager High Availability Wizard
Now select the Virtual Machine you want to add to the cluster. Note: In Windows Server 2012 the Virtual Machine can be online, in Windows Server 2008 R2 the Virtual Machine had to be offline.
Select Virtual Machine
After clicking Next in the wizard, the VM will be added to the Failover Cluster.
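If you prefer to script this instead of clicking through the Failover Cluster Manager, a minimal PowerShell sketch using the FailoverClusters module looks like this (the VM name is a placeholder):
Import-Module FailoverClusters
# Make an existing virtual machine highly available on the local cluster
Add-ClusterVirtualMachineRole -VirtualMachine "MyVM"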
Btw, thanks to Philipp Witschi (itnetx), whom I now owe a dinner.

Wednesday, June 8, 2016

How to Configure Database Mail in SQL Server 2008 R2?


In this article we will walk through configuring the Database Mail feature in SQL Server 2008 R2. The feature was first introduced in SQL Server 2005, so the same steps also work for SQL Server 2005 and 2008.
Before starting the configuration, make sure you have the SMTP server details that SQL Server will use to send emails, and that the Database Mail feature is enabled. To enable it, you can run the script below:
sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO
sp_configure 'Database Mail XPs', 1;
GO
RECONFIGURE;
GO
sp_configure 'show advanced options', 0;
GO
RECONFIGURE;
GO
Or, just start the configuration steps below; at the very first step the wizard will show a window stating that the Database Mail feature is not enabled. Click Yes to enable it.
NOTE: We enabled the Database Mail feature before starting the configuration below, so we did not see the window stating that the feature is not enabled.
STEPS
1) Connect to the SQL Server instance from SSMS and navigate to Management > Database Mail. Right-click on Database Mail and click the Configure Database Mail option.

2) Select the first option, “Set up Database Mail…”, and move to the next step.

3) Type the desired profile name and profile description, then click the Add button to add SMTP accounts to this profile.

4) Since this is the first profile we are creating, there will not be any Account name available. Here click on New Account to create a new account.

5) Provide the following details:
Account Name –> Desired account name for this SMTP account
Description –> Description of the account
E-mail address –> Email address from which emails will be sent. When a recipient receives an email, this address is shown as the sender.
Display Name –> Name associated with the email address
Reply e-mail –> Address to which replies to the emails will be sent
Server name –> The SMTP server to be used to send emails from SQL Server
Port Number –> Port number to be used by this account. The default is 25.
SMTP Authentication –> Select the desired authentication to be used and then click OK. Then click Next in the new window.
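If you prefer to script the account and profile instead of clicking through the wizard, the msdb sysmail procedures do the same job. A hedged sketch, where the account, profile, addresses and server name are all placeholders:
USE msdb;
GO
-- Create the SMTP account
EXEC dbo.sysmail_add_account_sp
    @account_name    = 'TestAccount',
    @description     = 'Account used by Database Mail',
    @email_address   = 'sql@contoso.com',
    @display_name    = 'SQL Server Mail',
    @replyto_address = 'sql@contoso.com',
    @mailserver_name = 'smtp.contoso.com',
    @port            = 25;
-- Create the profile and attach the account to it
EXEC dbo.sysmail_add_profile_sp
    @profile_name = 'Test',
    @description  = 'Database Mail profile';
EXEC dbo.sysmail_add_profileaccount_sp
    @profile_name    = 'Test',
    @account_name    = 'TestAccount',
    @sequence_number = 1;
GO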

6) This screen shows the available public profiles. We can see the Test profile that we created above.

7) This screen shows the available private profiles. We can see the Test profile that we created above. You can select the account that you want to associate with the private profile.

8) Make any necessary changes to the Database Mail system parameters and move to the next step.

9) The Database Mail Configuration Wizard will show a summary of the actions to be performed. Click Finish and Database Mail is configured.

10) You can test the newly configured Database Mail feature by right-clicking on Database Mail and clicking Send Test E-Mail, then selecting the recipient address and sending the test email.
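The same test can also be sent from T-SQL; a minimal sketch (profile name and recipient are placeholders):
EXEC msdb.dbo.sp_send_dbmail
    @profile_name = 'Test',
    @recipients   = 'someone@contoso.com',
    @subject      = 'Database Mail test',
    @body         = 'This is a test message sent through Database Mail.';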

Configure SQL Server Agent Mail to Use Database Mail


 Applies to: SQL Server 2016
This topic describes how to configure SQL Server Agent to use Database Mail to send notifications and alerts in SQL Server 2016, using SQL Server Management Studio.

Prerequisites

  • Enable Database Mail.
  • Create a Database Mail account for the SQL Server Agent service account to use.
  • Create a Database Mail profile for the SQL Server Agent service account to use, and add the user to the DatabaseMailUserRole in the msdb database (a T-SQL sketch follows this list).
  • Set the profile as the default profile for the msdb database.
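A hedged T-SQL sketch of the last two prerequisites; the login and profile names are placeholders, and the profile is assumed to already exist:
USE msdb;
GO
-- Let the SQL Server Agent service account use Database Mail
-- (assumes this login already has a user in msdb)
ALTER ROLE DatabaseMailUserRole ADD MEMBER [CONTOSO\SqlAgentSvc];
-- Make the 'Test' profile the default public profile in msdb
EXEC dbo.sysmail_add_principalprofile_sp
    @principal_name = 'public',
    @profile_name   = 'Test',
    @is_default     = 1;
GO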

Security

Permissions

The user who creates the profiles and accounts and executes the stored procedures must be a member of the sysadmin fixed server role.
To configure SQL Server Agent to use Database Mail
  • In Object Explorer, expand a SQL Server instance.
  • Right-click SQL Server Agent, and then click Properties.
  • Click Alert System.
  • Select Enable mail profile.
  • In the Mail system list, select Database Mail.
  • In the Mail profile list, select a mail profile for Database Mail.
  • Restart SQL Server Agent.
The following tasks are required to complete the configuration of the Agent to send alerts and notifications.
  • Alerts
    Alerts can be configured to notify an operator of a particular database event or operating-system condition.
  • Operators
    Operators are aliases for people or groups that can receive electronic notification.

Monday, May 9, 2016

Analysis Services Cube processing fails with error "OLE DB error: OLE DB or ODBC error: Operation canceled; HY008."

Lots of similar errors during processing

Analysis Services processing (on Adventureworks DW cube in my example) can fail with this error
“OLE DB error: OLE DB or ODBC error: Operation canceled; HY008.”
In SQL OLEDB terms, HY008 means DB_E_CANCELED (that is, your query was cancelled purposefully by the caller).
If you get lucky, you can see a better error from Management Studio
Internal error: The operation terminated unsuccessfully.
    OLE DB error: OLE DB or ODBC error: Query timeout expired;
HYT00.
    Errors in the OLAP storage engine: An error occurred while the dimension, with the ID of ‘Market Basket ~MC-Order Number’, Name of ‘Market Basket ~MC-Order Number’ was being processed.
HYT00 means DB_E_ABORTLIMITREACHED / 0x80040E31, a timeout expired: the timeout expired due to the SQL_QUERY_TIMEOUT setting, meaning the command timeout or query timeout kicked in to kill the running query and cancel the work.

Similar failure seen by Processing from XMLA results messages
<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
    <Parallel>
        <Process xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ddl2="http://schemas.microsoft.com/analysisservices/2003/engine/2" xmlns:ddl2_2="http://schemas.microsoft.com/analysisservices/2003/engine/2/2" xmlns:ddl100_100="http://schemas.microsoft.com/analysisservices/2008/engine/100/100" xmlns:ddl200="http://schemas.microsoft.com/analysisservices/2010/engine/200" xmlns:ddl200_200="http://schemas.microsoft.com/analysisservices/2010/engine/200/200" xmlns:ddl300="http://schemas.microsoft.com/analysisservices/2011/engine/300" xmlns:ddl300_300="http://schemas.microsoft.com/analysisservices/2011/engine/300/300">
            <Object>
                <DatabaseID>AdventureWorksDW2012Multidimensional-EE</DatabaseID>
            </Object>
            <Type>ProcessFull</Type>
            <WriteBackTableCreation>UseExisting</WriteBackTableCreation>
        </Process>
    </Parallel>
</Batch>
You get a ton of different errors appended to one long string. The first connection probably got a timeout, but you may not notice it, because all the other connections get a cancellation notification, and Analysis Services reports them in a seemingly random order in the string. Good luck finding the timeout indicator in this text…
Internal error: The operation terminated unsuccessfully. Internal error: The operation terminated unsuccessfully. Server: The current operation was cancelled because another operation in the transaction failed. Internal error: The operation terminated unsuccessfully. OLE DB error: OLE DB or ODBC error: Communication link failure; 08S01; Shared Memory Provider: No process is on the other end of the pipe.
; 08S01.
Errors in the OLAP storage engine: An error occurred while the dimension, with the ID of ‘Dim Time’, Name of ‘Date’ was being processed. Errors in the OLAP storage engine: An error occurred while the ‘Fiscal Year’ attribute of the ‘Date’ dimension from the ‘AdventureWorksDW2012Multidimensional-EE’ database was being processed. OLE DB error: OLE DB or ODBC error: Communication link failure; 08S01; Shared Memory Provider: No process is on the other end of the pipe.

08S01 means DB_E_CANNOTCONNECT from the provider, so this is a bit of a misnomer. It could be that it can't connect, or it has been disconnected / cancelled by the provider or the server when the query was cancelled.

Always check the OLAP\Log\Msmdsrv.log file too. You might find the error message there in case your application didn't log it.
{
(6/12/2012 4:52:21 PM) Message:  (Source: \\?\C:\OLAP\Log\msmdsrv.log, Type: 3, Category: 289, Event ID: 0xC1210003)
(6/12/2012 4:52:21 PM) Message: OLE DB error: OLE DB or ODBC error: Operation canceled; HY008. (Source: \\?\C:\OLAP\Log\msmdsrv.log, Type: 3, Category: 289, Event ID: 0xC1210003)
(6/12/2012 4:52:22 PM) Message: OLE DB error: OLE DB or ODBC error: Operation canceled; HY008. (Source: \\?\C:\OLAP\Log\msmdsrv.log, Type: 3, Category: 289, Event ID: 0xC1210003)
(6/12/2012 4:52:24 PM) Message: OLE DB error: OLE DB or ODBC error: Operation canceled; HY008. (Source: \\?\C:\OLAP\Log\msmdsrv.log, Type: 3, Category: 289, Event ID: 0xC1210003)
(6/12/2012 4:45:33 AM) Message: OLE DB error: OLE DB or ODBC error: Operation canceled; HY008. (Source: \\?\C:\OLAP\Log\msmdsrv.log, Type: 3, Category: 289, Event ID: 0xC1210003)
}
That indicates that the OLE DB provider reported an error, hex code 0xC1210003.

What does all this mean in context?

When Analysis Services processes a cube (or a smaller object like a dimension or measure group), it sends many large SQL queries to the relational database engine through an OLE DB provider, such as SELECT * FROM DimTable1, SELECT * FROM FactTable1, etc.
These queries can take minutes to hours, depending on how many joins there are and how big the tables (and partitions) are. It depends entirely on your cube design and on your dimension and measure group relationships.
To connect to the relational data source, there are connection strings stored in the cube design to point to the database server.
This is a connection string that gets saved into the AS database design. It can point to SQL Server, or it can point to other third-party relational databases (Teradata, Oracle, etc.). In the screenshot below I am using the SQL Server 2012 OLE DB provider, named SQLNCLI11.1.

Whenever a command (a TSQL query) is issued to this data source, the command timeout property is set by the Analysis Services server.
This is some ADO pseudo code to show how a command timeout is set by the code that runs Analysis Services internally…
conn1.Open();
command = conn1.CreateCommand();
command.CommandText = "Select * from DimTable";
command.CommandTimeout = 15;
So in the above example, if 15 seconds pass and the query hasn't finished yet, the OLE DB provider will cancel the query on behalf of the caller. The caller doesn't have to keep any timer because the timeout is enforced in the provider layer; therefore, if the query fails, the caller doesn't really know how long it took or whether it was a timeout.
In OLEDB terms, this property is called DBPROP_COMMANDTIMEOUT on DBPROPSET_ROWSET object. This property lets you run queries for a certain amount of time, and if the command doesn’t finish it will be cancelled. In SQL Server you can see such timeouts with an Attention event in the profiler trace, and the command’s duration will exactly match the duration of the command timeout.
Note that command timeout setting is not set on the Connection or the connection string itself, so it has to be set later per command.  The similar connection timeout is DBPROP_INIT_TIMEOUT on  DBPROPSET_DBINIT. In Analysis Services the connection timeout is a separate property ExternalConnectionTimeout. This would be applicable for making initial contact with the server, checking the accounts logging in, and such, but not very applicable for running long queries.
So how do you then set this OLE DB command timeout in the Analysis Services caller?
There is a setting (happens to be hidden behind the advanced options) on the Analysis Services instance that is set to a default of 60 mins = 1 hour.  That’s a pretty high timeout, meaning that if any one TSQL query to the relational database lasts 1 hour or more, it will be cancelled by the OLEDB provider used to connect to that system, and the Analysis services command (processing most likely) will fail.
ExternalCommandTimeout http://msdn.microsoft.com/en-us/library/ms174921.aspx
An integer property that defines the timeout, in seconds, for commands issued to external servers, including relational data sources and external Analysis Services servers.
The default value for this property is 3600 (seconds).
If you expect the processing queries to take more than 1 hour, then you might raise the timeout even higher than 1 hour. For example, I was working on a cube this week where the processing join queries take around 9 hours to complete on a 2 TB database with some very large complex joins.
Right click on the server name in Management Studio > Properties. Then check “Show Advanced (All) Properties”. Then adjust the ExternalCommandTimeout setting.
Now when it runs external queries to talk to the relational database, it will set the Command Timeout to the value specified so that it can run a long time without failure.

Should my processing queries really run this long?

Probably not! Maybe you should invest time to tune the joins that AS does when it runs all those processing queries in the background on your behalf, or partition your measure groups so that the unit of work done by processing is a smaller chunk of data rather than all data at once.
Partitioning requires a lot of thought and cube design work, so I won’t go into it here, but if you need to read more see this article: http://www.sqlservercentral.com/articles/Analysis+Services+(SSAS)/70282/ They say if you have more than 20 million rows in a table, and you are having AS processing performance problems, then consider partitioning.
After running the processing once or twice on the AS cubes, you can look for missing indexes easily like this if you use SQL Server as your relational data warehouse system. Take a few minutes to tune: add some indexes to the relational data warehouse tables to help those huge joins. This is some SQL code I borrowed from our support tool PSSDiag, which we use to identify the most helpful missing indexes; it works on SQL Server 2005, 2008, 2008 R2, and 2012. Find the indexes on the fact and dimension tables that improve performance the most.
PRINT 'Missing Indexes: '
PRINT 'The "improvement_measure" column is an indicator of the (estimated) improvement that might '
PRINT 'be seen if the index was created. This is a unitless number, and has meaning only relative '
PRINT 'the same number for other indexes. The measure is a combination of the avg_total_user_cost, '
PRINT 'avg_user_impact, user_seeks, and user_scans columns in sys.dm_db_missing_index_group_stats.'
PRINT ''
PRINT '-- Missing Indexes --'
SELECT CONVERT (varchar, getdate(), 126) AS runtime,
       mig.index_group_handle, mid.index_handle,
       CONVERT (decimal (28,1), migs.avg_total_user_cost * migs.avg_user_impact * (migs.user_seeks + migs.user_scans)) AS improvement_measure,
       'CREATE INDEX missing_index_' + CONVERT (varchar, mig.index_group_handle) + '_' + CONVERT (varchar, mid.index_handle)
           + ' ON ' + mid.statement
           + ' (' + ISNULL (mid.equality_columns, '')
           + CASE WHEN mid.equality_columns IS NOT NULL AND mid.inequality_columns IS NOT NULL THEN ',' ELSE '' END
           + ISNULL (mid.inequality_columns, '') + ')'
           + ISNULL (' INCLUDE (' + mid.included_columns + ')', '') AS create_index_statement,
       migs.*, mid.database_id, mid.[object_id]
FROM sys.dm_db_missing_index_groups mig
INNER JOIN sys.dm_db_missing_index_group_stats migs ON migs.group_handle = mig.index_group_handle
INNER JOIN sys.dm_db_missing_index_details mid ON mig.index_handle = mid.index_handle
WHERE CONVERT (decimal (28,1), migs.avg_total_user_cost * migs.avg_user_impact * (migs.user_seeks + migs.user_scans)) > 10
ORDER BY migs.avg_total_user_cost * migs.avg_user_impact * (migs.user_seeks + migs.user_scans) DESC
PRINT ''
GO

Sometimes queries are cancelled or fail for other reasons besides timeouts…

We get many support calls with these errors for non-timeout cases too. The most common other cause of a processing TSQL query being cancelled is an out-of-memory kind of concern.
There can be competition for memory between the SQL Server Database Engine (SQLServr.exe), Analysis Services (MsMdsrv.exe), Integration Services packages (DTExec.exe / ISExec.exe), and Reporting Services running on the same box. Maybe you need to throttle back the other services; the most common adjustment is to lower the SQL Server 'max server memory' setting.
Remember that processing is the most intensive time for a normal SQL Server, since the Analysis Services throws several large queries with plenty of joins to the SQL relational database engine at the same time.
exec sp_configure 'show advanced',1;
reconfigure;
exec sp_configure 'min server memory';
exec sp_configure 'max server memory';
-- look at config_value in the results for the current MB setting configured
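To actually lower the cap, the same sp_configure mechanism applies; the value below (8192 MB) is only an example and should be sized for your workload:
-- cap the SQL Server buffer pool at 8 GB (example value only)
exec sp_configure 'max server memory', 8192;
reconfigure;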
The ETL processes that typically run (SSIS packages to import large sets of data from a transactional system into a data warehouse system) rarely benefit from the wonderful buffering of the SQL Server database Engine’s buffer pool, because BULK INSERTs simply don’t require much memory. The SELECT and UPDATE and JOIN parts of the ETL processing (such as Lookups and slowly changing dimension updates) during the ETL phase of building a data warehouse certainly could benefit from SQL’s large buffer pool, so lowering the SQL Engine’s memory may have a side effect on those parts of the ETL imports that usually go on just before cube processing. That is, reading data from RAM is 1000-1million times faster than reading from your average spinning disk drive, therefore shrinking the SQL buffer pool means more disk reads, and unless you have high end SSD solid state disks or a high end SAN you may wait a little more.
Another obvious thing worth calling out: if you are on a 32-bit server, one simple answer is to upgrade to an x64 (64-bit) server, so that Analysis Services can address more memory at any one time and make use of more of the RAM in the computer.

In general processing is a complicated thing.

There are many other “dimensions” of processing I can’t fit into the blog post. Pardon the pun.


  • On the processing command are you specifying to run in Sequential order, or running Parallel tasks? Check the SSIS package or XMLA job that runs the processing.
SSIS Analysis Services Processing Task settings
XMLA – this example shows it running up to 8 tasks in parallel
<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Parallel MaxParallel="8">
    <Process xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ddl2="http://schemas.microsoft.com/analysisservices/2003/engine/2" xmlns:ddl2_2="http://schemas.microsoft.com/analysisservices/2003/engine/2/2" xmlns:ddl100_100="http://schemas.microsoft.com/analysisservices/2008/engine/100/100" xmlns:ddl200="http://schemas.microsoft.com/analysisservices/2010/engine/200" xmlns:ddl200_200="http://schemas.microsoft.com/analysisservices/2010/engine/200/200" xmlns:ddl300="http://schemas.microsoft.com/analysisservices/2011/engine/300" xmlns:ddl300_300="http://schemas.microsoft.com/analysisservices/2011/engine/300/300">
      <Object>
        <DatabaseID>AdventureWorksDW2012Multidimensional-EE</DatabaseID>
      </Object>
      <Type>ProcessFull</Type>
      <WriteBackTableCreation>UseExisting</WriteBackTableCreation>
    </Process>
  </Parallel>
</Batch>
Maybe you can scale back the number of parallel tasks (if you manually override “Let the Server decide” default), or run in Sequential mode to see if the errors go away, since it will take less memory to run a sequence of one task at a time rather than many at once. The tradeoff may be that it runs longer because you can’t push the hardware to the same throughput limits.
AS uses a memory quota to control the number of concurrent jobs. Each job calculates how much memory it needs to finish and requests a memory quota based on its estimate. The job will only proceed when the memory quota is granted. The quota for an aggregation job is estimated; the configuration settings that control the memory usage estimates are <AggregationMemoryLimitMin> and <AggregationMemoryLimitMax>.
To achieve more parallelism for processing you could take this advice to tune the settings…


Advanced scenario… what if it is none of the above?

In support, sometimes the simple solutions just don't work. When this happens we resort to advanced measurements and eventually debugging.

If memory is the culprit, I would gather a profiler trace and these performance counters to better investigate the cause the next time it happens.

A. Set up Windows performance monitor (Start > Run > perfmon) to produce a trace of resource consumption.
B. Right click on Counter Logs icon in the tree under Performance Logs, and begin a new counter log. Name the log.
C. Add the counter for the following Objects, ALL counters for each object, ALL instances for each object.
-Memory
-MSAS* — all objects (for default instances of AS)
-MSOLAP$InstanceName* — all objects (for named instances of AS)
-MSSQL* — all objects (for SQL Engine)
-Paging File
-Process
-Processor
-System
-Thread
D. Sample every 15 seconds.
E. On Log tab, specify the directory and file name strategy, as a Binary File.
F. To get Perfmon to roll over to a new file once a day, on the Schedule tab choose Stop log after “1 day”.
G. When the log file closes, choose “Start a new log file”.
Reviewing the performance monitor results
I would look at the SQL Server engine's counters to see if Memory Manager > Total Server Memory was growing out of control.
I would look at Memory > Available MBytes counter to see how much free memory was available to the processes running in Windows.
I would also look at Process > Private Bytes for the various executable processes to see how much each takes in comparison.
I would look for signs in the MSAS/MSOLAP counters. If the usage amount goes above the High KB amount, then AS would have to trim some of the buffers in memory.
  • Memory Usage KB
  • Memory Limit High KB
  • Memory Limit Low KB
  • Memory Limit Hard KB
If the usage KB amount exceeds the Hard KB limit, then Analysis Services may cancel all current work and go into "panic mode" to kill off the memory consumers, since they are not heeding the throttling fast enough. This may manifest itself in similar errors, but usually the error is more descriptive, such as "The Operation Has been Cancelled" or "The session was cancelled because it exceeded a timeout setting (session orphaned timeout or session idle timeout) or it exceeded the session memory limit."

The advanced debug route to find the code which saw the abort

The abort error text "Internal error: The operation terminated unsuccessfully." translates into hresult 0xC1000007.
The next most common error, "The OLE DB provider reported an error.", translates into hex code 0xC1210003.
I can add these hresults into the minidumperrorlist tag of the msmdsrv.ini file to get a minidump from the time of the failure. Following this KB article http://support.microsoft.com/kb/919711

    • Open notepad as administrator (elevated UAC)
    • Open the msmdsrv.ini file, which lives in a default folder such as C:\Program Files\Microsoft SQL Server\MSAS11.Instancename\OLAP\Config
    • Add a comma and the hresult of the error to the list of errors in the tag
<MinidumpErrorList>0xC1000000, 0xC1000001, 0xC102003F, 0xC1360054, 
0xC1360055, 0xC1000007, 0xC1210003</MinidumpErrorList>
    • Next time the error occurs a SQLDmpr0001.mdmp minidump will be produced into the OLAP\Log folder. Internally in Microsoft we review the dump file to look for context clues about why the failure occurred.
    • If we need to get a full memory dump to fully see the insides of the Analysis Services process, we can raise two more flags, 0x3f4 and 0x4, to get that. Then we can see the query syntax and other processing commands that might be running from the dump alone. Caution: the file can be very big – as big as the memory (RAM) private bytes consumed by the MsMdSrv.exe process as seen in Task Manager.
<SQLDumperFlagsOn>0x3f4</SQLDumperFlagsOn>
<SQLDumperFlagsOff>0x0</SQLDumperFlagsOff>
<MiniDumpFlagsOn>0x4</MiniDumpFlagsOn>
<MiniDumpFlagsOff>0x0</MiniDumpFlagsOff>
Example of a minidump from this exercise

Are there other timeouts in Analysis Services for long queries?

Query Timeout is another setting on the Data Source
As far as I can tell, this setting does not seem to apply readily to processing.
Perhaps this setting applies to the connection pool and will help expire idle connections that are no longer needed, but I don’t think this setting applies to the commands that are run during processing or ROLAP commands.
In that regard, I think our docs might be wrong on this point. http://msdn.microsoft.com/en-us/library/ms189027
Query Timeout
Specifies how long a connection request will be attempted before it is dropped.
Type the query timeout in the following format:
<Hours>:<Minutes>:<Seconds>
This property can be overruled by the DatabaseConnectionPoolTimeoutConnection server property. If the server property is smaller, it will be used instead of Query Timeout.
For more information about the Query Timeout property, see Timeout. For more information about the server property, see OLAP Properties.
There are many other timeouts in Analysis Services, such as ForceCommitTimeout (for processing to kill user queries should MDX queries hold locks that block the processing commit), CommitTimeout (for processing to give up if it gets blocked at the commit phase), ServerTimeout (for queries to time out after some time), and the connection pool settings such as IdleConnectionTimeout, IdleOrphanSessionTimeout, MaxIdleSessionTimeout, MinIdleSessionTimeout, DatabaseConnectionPoolConnectTimeout, and the ones we discussed, ExternalConnectionTimeout and ExternalCommandTimeout. We'll go into those other ones later!

SQL Server Best Practices

SQL Server Best Practices Article

Published: June 7, 2007

Writers: Denny Lee, Nicholas Dristas, Carl Rabeler
Contributors: Kevin Cox, Craig Utley, Eric Jacobsen, Akshai Mirchandani
Technical Reviewer: Sasha (Alexander) Berger, Dave Wickert
Applies To: SQL Server 2005 SP2
Summary: Following the best practices described in this article will help you improve the performance of OLAP database processing.
On This Page
Introduction
Dimension Processing Best Practices
Cube Processing Best Practices
Parallel Processing Best Practices
Troubleshooting Best Practices
Summary

Introduction

The recommendations in this white paper provide best practices for processing in Microsoft® SQL Server™ 2005 Analysis Services. These performance optimizations are based on real-world implementations with some of our top customers and partners who use SQL Server 2005 Analysis Services Enterprise Edition.
This document presents Analysis Services processing best practices recommendations in the following sections:

Dimension Processing Best Practices

While dimension processing is quite fast in Analysis Services 2005, here are some best practices to help optimize dimension processing. For more information on the inner workings of Analysis Services dimension processing, see the "Processing Engine" section in Analysis Services 2005 Processing Architecture by T.K. Anand.

Add indexes to the underlying table

In most cases, adding indexes to the underlying table improves dimension processing because the SQL statement that is generated is in the form of SELECT DISTINCT AttribA, AttribB FROM DimensionTable. Building good attribute / hierarchy relationships forces the Analysis Services engine to build multiple smaller DISTINCT queries instead of one large and very expensive SELECT DISTINCT AttribA, AttribB, ..., AttribN query. Therefore, additional indexes improve the performance of these DISTINCT queries.
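As a hedged illustration (the table and column names are placeholders), an index that covers one of these DISTINCT queries might look like this:
-- Supports SELECT DISTINCT CustomerKey, CustomerName FROM dbo.DimCustomer
CREATE NONCLUSTERED INDEX IX_DimCustomer_CustomerKey_CustomerName
ON dbo.DimCustomer (CustomerKey, CustomerName);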

Create a separate table for dimension processing

If you experience slow dimension processing, creating a new table that is used only to process the dimension can improve processing speed. For example, in the case of a snowflake dimension where multiple tables are joined together to define your dimension, consider creating a separate new table specifically for processing the dimension. This is especially important if you have dimensions that are designed on top of disparate databases; this way you can avoid OPENROWSET processing queries.
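A rough sketch of the idea, with made-up snowflake table names, is to flatten the joined tables into a single table that exists only for processing:
-- Flatten a snowflaked Product dimension into one processing-only table
SELECT p.ProductKey, p.ProductName, s.SubcategoryName, c.CategoryName
INTO dbo.DimProduct_Processing
FROM dbo.DimProduct p
JOIN dbo.DimProductSubcategory s ON p.ProductSubcategoryKey = s.ProductSubcategoryKey
JOIN dbo.DimProductCategory c ON s.ProductCategoryKey = c.ProductCategoryKey;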

Use SQL views to implement query binding for dimensions

While query binding for dimensions does not exist within Analysis Services 2005, you can implement it by using a view (instead of tables) for your underlying dimension data source. That way, you can use hints or other SQL tuning techniques to optimize the SQL statement that accesses the dimension tables through your view.
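A minimal sketch of such a view (the view, tables, and columns are illustrative only):
-- Analysis Services binds the dimension to this view instead of the base tables,
-- so hints or other SQL tuning can be applied here without touching the cube design
CREATE VIEW dbo.vw_DimCustomer
AS
SELECT c.CustomerKey, c.CustomerName, g.City, g.CountryRegion
FROM dbo.DimCustomer AS c
JOIN dbo.DimGeography AS g ON c.GeographyKey = g.GeographyKey;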

Be careful when you use the ProcessUpdate enumeration

If you use ProcessUpdate and dimension members are being changed or deleted, flexible aggregations will be dropped. These must be recreated by processing the indexes on the affected partitions and objects or by enabling lazy aggregations. Otherwise, there will not be any aggregations for users to query against; this results in slower query performance. Also, if rigid relationships have been defined and a change is detected, an error will be returned, requiring you to perform a full process.

Use the ProcessAdd enumeration if you are only adding new dimension members

The ProcessAdd process enumeration is not available in the processing UI but it is available within XMLA. Introduced in Analysis Services 2005, it is essentially an optimized version of ProcessUpdate in cases where only new members are being added. To use ProcessAdd correctly to improve performance, restrict the dimension table to only the new rows added. For more information on this, including sample code, see the second set of code in Analysis Services 2005 Processing Architecture on the Microsoft Developers Network (MSDN). This processing type allows dimension processing to be much faster as it will never delete or update existing dimension members nor does it drop flexible aggregations.

Upgrade to Analysis Services SP2 because it has dimension process caching

Analysis Services Service Pack 2 (SP2) provides processing improvements through dimension processing caching. To take advantage of this feature on larger systems, add memory so that Analysis Services can use it. (It is not uncommon for Analysis Services to use 10 GB (gigabytes) of memory if it is available.) This prevents it from paging to disk.
Performance improvements when using dimension process caching are typically seen when processing on servers with two or more processors. For example, the following table shows processing times for a 4-GB RAM, four-processor server that is processing 32 partitions in parallel. Both the Analysis Services and SQL Server databases are on the same server.
Service Pack     Process time (hh:mm:ss.sss)
SP1              00:13:28.020
SP2              00:12:29.966

Cube Processing Best Practices

To ensure fast and effective cube processing, it is important to remember that cube processing is performed in Analysis Services by executing Analysis Services-generated SQL statements against the underlying relational database. Because of this, it is possible (when necessary) to tune the underlying SQL statements and/or update statistics. Adding indexes to the underlying tables may improve performance as well, but test this first as table scanning is often a faster approach for cube processing.

Use integer keys if at all possible

When working with any version of Analysis Services, ensure that the key columns are integer. This is important for both SQL and non-SQL RDBMS systems because of the SQL statement generated by Analysis Services when it is processing a cube, measure group, or partition. The resulting query plan and execution of this SQL statement will be more efficient if the joins are between integer keys.
Performance improvements are even more apparent with Analysis Services SP2 due to its dimension processing caching feature. These improvements also optimize in the case of two composite integer keys of sizes 1, 2, 4, and 8 bytes. While string keys provide limited improvement, this is not as evident as when using integer keys.
Following is the SQL query that is generated by Analysis Services when processing the [Internet Sales] measure group in the [Adventure Works DW] OLAP database.
SELECT 
     [dbo_FactInternetSales].[CustomerKey] AS 
     [dbo_FactInternetSalesCustomerKey0_0],
     [dbo_FactInternetSales].[PromotionKey] AS 
     [dbo_FactInternetSalesPromotionKey0_1],
     [dbo_FactInternetSales].[SalesTerritoryKey] AS 
     [dbo_FactInternetSalesSalesTerritoryKey0_2],
     [dbo_FactInternetSales].[OrderDateKey] AS 
     [dbo_FactInternetSalesOrderDateKey0_3],
     [dbo_FactInternetSales].[ShipDateKey] AS 
     [dbo_FactInternetSalesShipDateKey0_4],
     [dbo_FactInternetSales].[DueDateKey] AS 
     [dbo_FactInternetSalesDueDateKey0_5],
     [dbo_FactInternetSales].[ProductKey] AS 
     [dbo_FactInternetSalesProductKey0_6],
     [dbo_FactInternetSales].[CurrencyKey] AS 
     [dbo_FactInternetSalesCurrencyKey0_7]
FROM  (
     SELECT 
          [dbo].[FactInternetSales].[ProductKey],
          [dbo].[FactInternetSales].[OrderDateKey],
          [dbo].[FactInternetSales].[DueDateKey],
          [dbo].[FactInternetSales].[ShipDateKey], 
          [dbo].[FactInternetSales].[CustomerKey], 
          [dbo].[FactInternetSales].[PromotionKey],
          [dbo].[FactInternetSales].[CurrencyKey],
          [dbo].[FactInternetSales].[SalesTerritoryKey],
          [dbo].[FactInternetSales].[SalesOrderNumber],
          [dbo].[FactInternetSales].[SalesOrderLineNumber],
          [dbo].[FactInternetSales].[RevisionNumber],
          [dbo].[FactInternetSales].[OrderQuantity],
          [dbo].[FactInternetSales].[UnitPrice],
          [dbo].[FactInternetSales].[ExtendedAmount],
          [dbo].[FactInternetSales].[UnitPriceDiscountPct],
          [dbo].[FactInternetSales].[DiscountAmount],
          [dbo].[FactInternetSales].[ProductStandardCost],
          [dbo].[FactInternetSales].[TotalProductCost],
          [dbo].[FactInternetSales].[SalesAmount],
          [dbo].[FactInternetSales].[TaxAmt],
          [dbo].[FactInternetSales].[Freight],
          [dbo].[FactInternetSales].[CarrierTrackingNumber],
          [dbo].[FactInternetSales].[CustomerPONumber]
     FROM [dbo].[FactInternetSales]
     WHERE OrderDateKey >= '915' AND OrderDateKey <= '1280'
)
AS [dbo_FactInternetSales]
ORDER BY [dbo_FactInternetSales].[CustomerKey]
ASC
The SQL statement, and particularly the query plan, that are generated by Analysis Services are different when the underlying fact table for the [Internet Sales] Measure Group uses varchar keys as shown in Figure 1.
Figure 1   Estimated query plan between varchar-key vs. integer-key fact tables
As you can see from the estimated query plan in Figure 1, the varchar-key fact table (Query 2) has a higher cost associated with the table scan than does the integer-key fact table (Query 1). Specifically, there are higher subtree costs associated with both the Sort and Table Scan tasks for the varchar-key fact table.
Task          Total Subtree Cost         Total Subtree Cost
              Query 1 (integer keys)     Query 2 (varchar keys)
Sort          3.19582                    3.375974
Table Scan    0.9134235                  1.247498

Use query binding to optimize processing

You can improve cube processing performance by tuning the underlying SQL statements executed. For this reason, it is important to use query binding instead of table binding when you build your measure group partitions (see the SQL statements in Figures 2 and 3). Looking at the Adventure Works OLAP cube in Microsoft Visual Studio®, notice the Partitions tab as shown in Figure 2.
Figure 2   Adventure Works Partitions tab
Clicking on the partition source opens the Partition Source dialog box as shown in Figure 3. As you can see, by using the query binding binding type, the data source is a developer-created SQL statement.
Figure 3   Partition Source dialog box
By using this binding type option, you can optimize the SQL query that is sent to the relational data source.

Partition measure groups if you have a lot of data

It is generally a good idea to map the measure group partitions to either separate SQL table partitions by using partition keys or onto a separate SQL table. Use partitioning wherever possible; this limits the amount of reprocessing that needs to be done based on your extraction, transformation, and loading (ETL). For more information, see "Partition Design Best Practices" in OLAP Design Best Practices for Analysis Services 2005 on Microsoft TechNet.

Use ProcessData and ProcessIndex instead of ProcessFull

The ProcessFull enumeration type executes the ProcessData and ProcessIndexes jobs. For more information on these process enumeration types, see the "Dimension-Processing Commands" and "Partition-Processing Commands" sections in the SQL Server Analysis Services 2005 Performance Guide.
Breakdown of processing enumerations
While testing, the Microsoft SQL Server Development Customer Advisory Team (SQLCAT) noticed that the processing against the relational database completes faster and Analysis Services consumes fewer memory resources when the ProcessData and ProcessIndexes enumerations are used as compared to ProcessFull. Memory considerations are especially important on 32-bit systems because of the limited memory available. As you can see from the following table, there are noticeable improvements in processing time.
Process enumeration     Processing time
Two-step process        00:12:23
   ProcessData          00:10:18
   ProcessIndexes       00:02:05
ProcessFull             00:13:34
To change the processing type in the XMLA script, you need only alter the <Process><Type></Type></Process> element as shown in the following code example. XML namespaces have been removed from the code for clarity.
<Batch xmlns="...">
  <Parallel>
    <Process xmlns:xsd="..." xmlns:xsi="..." 
        xmlns:ddl2="..." xmlns:ddl2_2="...">
      <Object>
        <DatabaseID>Adventure Works DW</DatabaseID>
        <CubeID>Adventure Works DW</CubeID>
      </Object>
      <Type>ProcessData</Type> <!-- or ProcessIndexes -->
      <WriteBackTableCreation>UseExisting</WriteBackTableCreation>
    </Process>
  </Parallel>
</Batch>
For more information on the available processing enumerations, see SQL Server  Books Online, particularly the Type Element topic.
CPU usage patterns are different between the enumerations
The apparent reason for this difference is that the CPU usage patterns between the two different actions are quite different. See the performance monitor output in the next three figures (the % Processor Time is highlighted in the figures). As you can see, not only is there faster processing time by using ProcessData and ProcessIndexes, but there are also differences in CPU utilization as well (ProcessFull: 64.021, ProcessData: 47.847, ProcessIndexes: 54.353).
Figure 4   CPU usage pattern: ProcessFull, average 64.021
Figure 5   CPU usage pattern: ProcessData, average 47.847
Figure 6   CPU usage pattern: ProcessIndexes, average 54.353
As you can see from Figures 5 and 6, the ProcessData and ProcessIndexes enumerations behave differently from ProcessFull. For some customers, this difference resulted in less processing time against their relational data source.
To investigate the scalability of your system, process using ProcessData with one partition and MaxParallel=1. Then measure two partitions with MaxParallel=2, ..., and so forth. Measure and compare the impact (memory utilized, CPU usage, disk I/O, and so on) to determine optimal performance.

Parallel Processing Best Practices

Here are a number of techniques for optimizing parallel processing.

Do not always use the UI default for parallel processing

By default, when Analysis Services processes the selected cube, measure group, or partitions, the server decides how to process the object(s) in parallel. However, letting the server decide how many threads to process in parallel may not always be optimal. For example, for the [Adventure Works DW] sample OLAP database on a 4-GB RAM, four-processor server running Analysis Services SP2, the comparable processing times in seconds are:
Parallel processing setting                 Process time (ss.sss)
Letting the server decide                   50.990
Manually setting parallel to four tasks     47.347
To change the parallel processing setting from within the UI, from the Analysis Services Processing dialog box, click the Change Settings button to open up the Change Settings dialog box shown in Figure 7. From the Maximum parallel tasks list, select an appropriate value for the number of parallel tasks. The next section tells you how to determine the appropriate value for your system.
Figure 7   Analysis Services Change Settings dialog box

Set the appropriate values for parallel processing

Generally, the appropriate number of parallel tasks is 1.5 – 2 times the number of CPUs; exceeding this value may cause competition for resources. Always test the system to see how well parallelism will work for you. Since you have limited resources, a plausible option could also be to use less parallelism instead of more.
The ascmd.exe command-line utility is a handy tool for testing performance characteristics to determine the appropriate values for parallel processing. This utility is similar to the sqlcmd.exe utility that you use to execute SQL statements in SQL Server. The ascmd.exe utility executes XMLA commands against an Analysis Services server.
The ascmd.exe utility is included in SQL Server Samples, available at SQL Server 2005 Samples and Sample Databases on the Microsoft Download Center. After you install the samples, you can find ascmd.exe in the c:\Program Files\Microsoft SQL Server\90\Samples\Analysis Services\Administrator\ascmd folder. For more information on the ascmd command-line utility, see the Readme For Ascmd Command-line Utility Sample on MSDN.
Following is an example XMLA script to process the [Adventure Works DW] cube; XMLA name spaces have been removed for clarity.
<Batch xmlns="...">
  <Parallel MaxParallel="8">
    <Process xmlns:xsd="..." xmlns:xsi="..." 
        xmlns:ddl2="..." xmlns:ddl2_2="...">
      <Object>
        <DatabaseID>Adventure Works DW</DatabaseID>
        <CubeID>Adventure Works DW</CubeID>
      </Object>
      <Type>ProcessFull</Type>
      <WriteBackTableCreation>UseExisting</WriteBackTableCreation>
    </Process>
  </Parallel>
</Batch>
By using ascmd.exe and XMLA, you can adjust the XMLA MaxParallel parameter to try out different values. (The XMLA example set this to a value of 8; the previous tip, Do not always let the UI default for parallel processing, shows how to set this in the UI.) This setting specifies the maximum number of threads on which to run commands in parallel as in the Change Settings dialog box (accessed through the processing UI) in Figure 7.
Using the ascmd.exe utility and four different XMLA scripts, we were able to quickly automate and test four different parallel processing settings. Following are the results of this test. As you can see, for the particular machine configuration used in this test, it was beneficial to set the parallelism to 8 instead of letting the server decide.
MaxParallel settings               SQL/OLAP on same server    SQL/OLAP on different servers
Let the server decide (default)    00:12:29.966               00:07:43.060
Parallel = 4                       00:17:05.000               00:08:07.953
Parallel = 6                       00:12:57.023               00:07:37.023
Parallel = 8                       00:11:28.983               00:07:31.933

Use the XMLA <Parallel> nodes to group processing tasks

In the processing tasks defined within your XMLA, create logical <Parallel></Parallel> sets to group processing tasks together so that the server uses the maximum resources. At the same time, get into the habit of processing the individual partitions (versus the entire measure group) using the <PartitionID></PartitionID> node to process less data at any one time.
Within the <Batch></Batch> element, if you set the Transaction attribute to true, all commands within the batch will be considered as a single transaction. If this attribute is set to false, every single processing command will be committed upon successful completion. This latter option is beneficial if you are processing different measure groups and/or objects in parallel.  As you can see in the following example, the XMLA is processing the [Fact Sales Summary] measure group and its individual partitions in parallel.
<Batch xmlns="..." Transaction="true">
  <Parallel>
    <Process xmlns:xsd="..." xmlns:xsi="..." ...="">
      <Object>
        <DatabaseID>Adventure Works DW</DatabaseID>
        <CubeID>Adventure Works DW</CubeID>
        <MeasureGroupID>Fact Sales Summary</MeasureGroupID>
        <PartitionID>Total_Sales_2001</PartitionID>
      </Object>
      <Type>ProcessFull</Type>
      <WriteBackTableCreation>UseExisting</WriteBackTableCreation>
    </Process>
    <Process xmlns:xsd="..." xmlns:xsi="..." ...="">
      <Object>
        <DatabaseID>Adventure Works DW</DatabaseID>
        <CubeID>Adventure Works DW</CubeID>
        <MeasureGroupID>Fact Sales Summary</MeasureGroupID>
        <PartitionID>Total_Sales_2002</PartitionID>
      </Object>
      <Type>ProcessFull</Type>
      <WriteBackTableCreation>UseExisting</WriteBackTableCreation>
    </Process>
    <Process xmlns:xsd="..." xmlns:xsi="..." ...="">
      <Object>
        <DatabaseID>Adventure Works DW</DatabaseID>
        <CubeID>Adventure Works DW</CubeID>
        <MeasureGroupID>Fact Sales Summary</MeasureGroupID>
        <PartitionID>Total_Sales_2003</PartitionID>
      </Object>
      <Type>ProcessFull</Type>
      <WriteBackTableCreation>UseExisting</WriteBackTableCreation>
    </Process>
    <Process xmlns:xsd="..." xmlns:xsi="..." ...="">
      <Object>
        <DatabaseID>Adventure Works DW</DatabaseID>
        <CubeID>Adventure Works DW</CubeID>
        <MeasureGroupID>Fact Sales Summary</MeasureGroupID>
        <PartitionID>Total_Sales_2004</PartitionID>
      </Object>
      <Type>ProcessFull</Type>
      <WriteBackTableCreation>UseExisting</WriteBackTableCreation>
    </Process>
  </Parallel>
</Batch>
For the sake of clarity, the xml namespaces in this example were removed. Note that the Transaction attribute has been set to true, meaning that the processing of the four [Fact Sales Summary] partitions will be committed as a single transaction. On a single-processor desktop computer, you may notice that the processing of this measure group will take 11s (seconds) when processing as a single measure group and 6s when processing as four individual partitions in parallel.

Troubleshooting Best Practices

These tips for troubleshooting processing will help you to understand your processing bottlenecks.

Profile your SQL statements to tune them

To help with tuning, you can use SQL Profiler to capture the SQL statements issued by Analysis Services so that you can perform your regular SQL tuning efforts (such as updating statistics, adding indexes, and so on). For example, processing the [Sales Summary] measure group in the [AdventureWorksDW] OLAP DB provides the SQL Profiler trace in Figure 8.
Figure 8   SQL Server Profiler trace for AdventureWorks processing
By selecting one of the many SQL statements issued, you can then generate its query execution plan. In the case of the SQL statement selected in Figure 8, the total subtree cost was 2.285195.
Figure 9   Total subtree cost of 2.285195
In this particular case, the Sales Summary measure group is partitioned by the Order Date; the underlying SQL has a WHERE clause using the OrderDateKey. Adding a clustered index on the OrderDateKey column of the FactResellerSales table would improve the performance of cube processing. The new query plan for the same SQL statement now has a total subtree cost of 1.142899 (Figure 10).
Figure 10   Total subtree cost 1.142899 with clustered indexes
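The index change described above might look like this (a sketch; it assumes the fact table does not already have a clustered index):
-- Cluster the fact table on the key used in the processing WHERE clause
CREATE CLUSTERED INDEX CIX_FactResellerSales_OrderDateKey
ON dbo.FactResellerSales (OrderDateKey);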

Create a system-wide trace file

If you are having problems determining what is causing your processing bottlenecks, keep a system-wide processing trace file. Use it to determine which processing queries are running slower (or faster) than expected, as well as whether they were executed at all. You can obtain this trace file by using SQL Profiler to record the information as described in the previous best practice, Profile your SQL statements to tune them.
Another handy way to do this is to run the following XMLA script (which is processing the Adventure Works cube), which outputs the trace to the c:\OLAP_Processing.trc file.
<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine" 
xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <Create xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
    <ObjectDefinition>
      <Trace>
        <ID>Analysis Services Processing Trace</ID>
        <Name>Analysis Services Processing Trace</Name>
        <LogFileName>C:\OLAP_Processing.trc</LogFileName>
        <LogFileAppend>1</LogFileAppend>
        <AutoRestart>1</AutoRestart>
        <LogFileSize>50000</LogFileSize>
        <LogFileRollover>1</LogFileRollover>
        <Events>
          <Event>
            <EventID>5</EventID>
            <Columns>
              <ColumnID>7</ColumnID>
              <ColumnID>15</ColumnID>
              <ColumnID>39</ColumnID>
              <ColumnID>8</ColumnID>
              <ColumnID>12</ColumnID>
              <ColumnID>28</ColumnID>
              <ColumnID>32</ColumnID>
              <ColumnID>40</ColumnID>
              <ColumnID>1</ColumnID>
              <ColumnID>13</ColumnID>
              <ColumnID>25</ColumnID>
              <ColumnID>33</ColumnID>
              <ColumnID>41</ColumnID>
              <ColumnID>2</ColumnID>
              <ColumnID>14</ColumnID>
              <ColumnID>42</ColumnID>
              <ColumnID>3</ColumnID>
              <ColumnID>11</ColumnID>
              <ColumnID>43</ColumnID>
            </Columns>
          </Event>
          <Event>
            <EventID>6</EventID>
            <Columns>
              <ColumnID>7</ColumnID>
              <ColumnID>15</ColumnID>
              <ColumnID>23</ColumnID>
              <ColumnID>39</ColumnID>
              <ColumnID>8</ColumnID>
              <ColumnID>24</ColumnID>
              <ColumnID>32</ColumnID>
              <ColumnID>40</ColumnID>
              <ColumnID>1</ColumnID>
              <ColumnID>9</ColumnID>
              <ColumnID>25</ColumnID>
              <ColumnID>33</ColumnID>
              <ColumnID>41</ColumnID>
              <ColumnID>2</ColumnID>
              <ColumnID>6</ColumnID>
              <ColumnID>10</ColumnID>
              <ColumnID>14</ColumnID>
              <ColumnID>22</ColumnID>
              <ColumnID>42</ColumnID>
              <ColumnID>3</ColumnID>
              <ColumnID>11</ColumnID>
              <ColumnID>43</ColumnID>
              <ColumnID>4</ColumnID>
              <ColumnID>12</ColumnID>
              <ColumnID>28</ColumnID>
              <ColumnID>5</ColumnID>
              <ColumnID>13</ColumnID>
            </Columns>
          </Event>
          <Event>
            <EventID>8</EventID>
            <Columns>
              <ColumnID>7</ColumnID>
              <ColumnID>15</ColumnID>
              <ColumnID>39</ColumnID>
              <ColumnID>8</ColumnID>
              <ColumnID>24</ColumnID>
              <ColumnID>1</ColumnID>
              <ColumnID>5</ColumnID>
              <ColumnID>9</ColumnID>
              <ColumnID>13</ColumnID>
              <ColumnID>25</ColumnID>
              <ColumnID>41</ColumnID>
              <ColumnID>2</ColumnID>
              <ColumnID>10</ColumnID>
              <ColumnID>14</ColumnID>
              <ColumnID>22</ColumnID>
              <ColumnID>42</ColumnID>
              <ColumnID>3</ColumnID>
              <ColumnID>11</ColumnID>
              <ColumnID>43</ColumnID>
              <ColumnID>4</ColumnID>
              <ColumnID>12</ColumnID>
              <ColumnID>28</ColumnID>
            </Columns>
          </Event>
        </Events>
      </Trace>
    </ObjectDefinition>
  </Create>
  <Parallel>
    <Process xmlns:xsd="http://www.w3.org/2001/XMLSchema" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xmlns:ddl2="http://schemas.microsoft.com/analysisservices/2003/engine/2" 
xmlns:ddl2_2="http://schemas.microsoft.com/analysisservices/2003/engine/2/2">
      <Object>
        <DatabaseID>Adventure Works DW</DatabaseID>
        <CubeID>Adventure Works DW</CubeID>
      </Object>
      <Type>ProcessFull</Type>
      <WriteBackTableCreation>UseExisting</WriteBackTableCreation>
    </Process>
  </Parallel>
</Batch>
 
<Delete xmlns="http://schemas.microsoft.com/analysisservices/2003/engine"
xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <Object>
    <TraceID>Analysis Services Processing Trace</TraceID>
  </Object>
</Delete>

Upgrade to Analysis Services SP2 for NUMA support

A big improvement to processing with SP2 is Analysis Services native support of non-uniform memory access (NUMA) hardware. For more information on NUMA, see Understanding Non-uniform Memory Access in SQL Server Books Online.

Prevent long-running queries from interfering with processing

Long-running queries that are executed concurrently with processing can prevent processing from completing. In order to commit the processing activity, Analysis Services takes an exclusive lock on the database. A long-running query will block this lock request, and processing must wait for the queries to finish before it can complete. If you cannot create an exclusive processing window, consider architecting a querying/processing Analysis Services server architecture as in Figure 11. The Analysis Services processing server exclusively processes data; upon completion, it synchronizes with the Analysis Services querying server, which solely renders MDX queries.
Figure 11   Querying / Processing AS server architecture

Yes, a SAN helps

Similar to SQL Server processing, Analysis Services processing executes faster on a Storage Area Network (SAN). Ensuring that the SAN (spindles, LUNs, and so on) and HBA (drivers, throughput, and so on) are properly configured, enables enterprise Analysis Services projects to process faster because they are not slowed down by suboptimal disk activity. For more information, see SQL Server I/O Predeployment Best Practices on TechNet.