Windows console encoding

Filenames on NTFS are encoded in UTF-16, but the Windows console defaults to a legacy code page.  This makes working with files with ‘special’ characters in the filenames impossible…

In my case, I was using the following common code to delete files and folders in a directory:

set folder="C:\test"

cd /d %folder%

for /F "delims=" %%i in ('dir /b') do (rmdir "%%i" /s/q || del "%%i" /s/q)


But files with certain Unicode characters were not being deleted.  To fix this, add the following at the top of the file:

chcp 65001


This changes the console code page to UTF-8 (65001), which covers the full Unicode range.  (There is no chcp value for UTF-16; code page 10000 is actually Mac Roman.)
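For reference, the whole batch file with a code-page switch at the top might look like this (a sketch; 65001 is the UTF-8 code page, and the folder path is illustrative):

```batch
@echo off
rem Switch the console code page to UTF-8 so filenames with
rem non-ASCII characters round-trip correctly
chcp 65001

set folder="C:\test"
cd /d %folder%

rem Remove every entry in the folder: try rmdir first (directories),
rem fall back to del for plain files
for /F "delims=" %%i in ('dir /b') do (rmdir "%%i" /s /q || del "%%i" /q)
```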

Or if you’re using cmd and the dir command, change the font first to Lucida Console (as the default font has a very limited character set).

Sharepoint 2013 Provider-Hosted App Architecture Notes

Trying to build a Sharepoint 2013 app has probably been the worst experience of my coding life so far.

The Microsoft docs make it sound so easy; there are so many ways you can build an app!  You can use any programming language you like!  Look, we have a REST interface!  Look, mobile app APIs!

Hey, awesome, you think, looking through the initial introductory documentation.  Yeah, all the different information is a bit confusing, but look, they have how-tos and the APIs are documented properly.  How hard could it be?

Well, after wasting A LOT of time following guides and trying to build solutions that work, here’s some information that turned out to be crucial to the architectural decisions for the apps, and that I didn’t come across until much too late.  It may well be wrong, because I’m finding it extremely difficult to get actual facts about the different ways you can build SharePoint apps, despite the millions of confusing articles on the Microsoft site (none of which seem to contain all the information you need to know) and lots of tutorials (written only by people coding in ASP, hosting their sites on Azure, or using OAuth).


Provider-hosted apps using the REST API:

  • You can either use the JavaScript cross-domain library or use OAuth
  • Using OAuth requires an account with Azure AD, and you also need to configure your SharePoint installation to use Azure AD (and obviously the SharePoint installation needs access through firewalls etc. to communicate with Azure AD).  In addition, the app needs to be registered in Azure.
  • I’ve seen some tutorials that say for testing you just need to register the app in SP and not Azure, and that you don’t need Azure AD in this case; I couldn’t get this to work.

Provider-hosted apps using high trust:

  • The how-to guides all use a couple of Microsoft-provided C# files for the authentication, in addition to Windows Authentication for the site in IIS, and I can’t see any documentation on how the process actually works.  Reading through the files, they get the Windows user information, so I have a feeling this method can only be used for apps that are (1) built in ASP.NET/C# running on a Windows machine, and (2) in the same domain as the SharePoint installation.


So if you want to build an app that can modify SharePoint data in any non-Microsoft language, host it on a non-Windows machine, avoid paying for an Azure subscription, and avoid changing the authentication method of your SharePoint site, your options are:

  1. A JavaScript frontend to deal with SharePoint, plus likely a backend of whatever to do anything you can’t with JavaScript (use 3rd-party APIs etc.)
  2. A high-trust app to act as a proxy between your app and the SharePoint installation*

*I’m still trying to figure out how it would be possible to send the REST request I want to make to SharePoint to the proxy instead, and have that sign it and forward it on to SharePoint…

Postfix queue management bash scripts

A couple of scripts I used while cleaning up a mail server.  I’m sure they can be improved, and the last one is quite specific to my own requirements, but I’ll put them here anyway.

Move emails with a particular subject from the hold queue to the deferred queue:

# change directory to Postfix's queue directory
cd $(postconf -h queue_directory)/hold
# loop over queue files
for i in * ; do
    # postcat the file, grep for subject "test", and if found,
    # run postsuper -H to release the message from hold (back to deferred)
    postcat "$i" | grep -q '^Subject: test' && postsuper -H "$i"
done

Delete emails in the hold queue that are being sent to a recipient that has already received an email (is in the mail log), plus any duplicate emails (with the same recipient/subject):

cd $(postconf -h queue_directory)/hold
NUM=0
# loop over queue files
for i in * ; do
   if [ -f "$i" ]; then
       IDENT=$(postcat "$i" | grep -A 1 "To:")
       RECIPIENT=$(postcat "$i" | grep "To:" | cut -c 5-)
       if grep -q "$RECIPIENT" /root/postfixtmp/logs/mailsent.log; then
           echo "* already sent to $RECIPIENT, deleting $i" | tee -a /root/postfixtmp/queueclean.log
           echo "$IDENT" | tee -a /root/postfixtmp/queueclean.log
           NUM=$((NUM + 1))
           postsuper -d "$i"
           echo "----" | tee -a /root/postfixtmp/queueclean.log
           # look for queued duplicates (same To:/Subject: pair) of the
           # message we just deleted
           for o in * ; do
               if [ -f "$o" ] && [ "$o" != "$i" ]; then
                   CURRENT=$(postcat "$o" | grep -A 1 "To:")
                   if [ "$CURRENT" = "$IDENT" ]; then
                       echo "* duplicate email, deleting $o *" | tee -a /root/postfixtmp/queueclean.log
                       echo "$CURRENT" | tee -a /root/postfixtmp/queueclean.log
                       NUM=$((NUM + 1))
                       postsuper -d "$o"
                       echo "----" | tee -a /root/postfixtmp/queueclean.log
                   fi
               fi
           done
       fi
   fi
done
echo "Deleted $NUM emails" | tee -a /root/postfixtmp/queueclean.log
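The duplicate test compares the "To:" line plus the line after it (normally the Subject).  That logic can be tried out without touching Postfix, using dummy text files to stand in for queue files (all names below are made up):

```shell
# Sketch of the duplicate check, on dummy files instead of queue files
mkdir -p /tmp/fakequeue
cd /tmp/fakequeue
printf 'To: alice@example.com\nSubject: hello\nbody A\n' > msg1
printf 'To: alice@example.com\nSubject: hello\nbody B\n' > msg2
printf 'To: bob@example.com\nSubject: other\nbody C\n' > msg3

# header pair (To: plus the following line) of the reference message
IDENT=$(grep -A 1 "To:" msg1)
for f in msg2 msg3; do
    CURRENT=$(grep -A 1 "To:" "$f")
    if [ "$CURRENT" = "$IDENT" ]; then
        echo "$f is a duplicate of msg1"
    fi
done
# prints: msg2 is a duplicate of msg1
```

Note this only matches when the headers are byte-for-byte identical; folded headers or differing capitalisation would slip through.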

Recovering VMs that were on local storage after removing host from XenServer pool

When you remove a host from a XenServer pool, the host gets reinitialized, so any VMs on local storage get lost.  Luckily, it’s not too hard to recover the VDIs from LVM.  Here’s an outline of the steps, with some links that have more info / specific commands.

  1. If you can, join the host back to the pool and connect to your shared storage; this way you get the VMs (which were moved to the pool when you added the host) and the VDIs, and only have to match the two together at the end
  2. Navigate to /etc/lvm/backup and find the file with the previous LVM data (the logical volumes should include all of your old VDIs / snapshots, and the file should have the relevant device path, e.g. /dev/sda3)
  3. Find the current physical volume UUID
  4. Back up the /etc/lvm directory
  5. Modify the old volume group file and replace the old physical volume UUID with the current one
  6. Detach the local storage SR from the XenServer (see link below)
  7. Use vgcfgrestore to restore the old volume group file
  8. If you run vgscan, you should see the new volume group replaced by the old one (the name will be the same)
  9. Attach the local storage SR to the XenServer with the current volume group name
  10. Create a new PBD with the SCSI ID and plug it in (see link below)
  11. Scan the new SR; it should pick up the old VDIs, but without any metadata.  If you create a new VM, attach these one by one as secondary disks, mount them on the new VM and check what they are, you can then rename them and attach them back to your VMs (which should be sitting in your pool).
  12. Move all the VDIs you need over to your new SR, then you can remove the host again
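Step 5’s UUID swap is a one-line sed.  Here is a sketch against a dummy copy of the backup file; the UUIDs, path, and VG layout below are invented examples:

```shell
# Sketch of step 5: replace the old PV UUID with the current one in a
# copy of the LVM backup file. UUIDs and paths here are made up.
OLD_UUID="oldAAA-xxxx-yyyy-zzzz-aaaa-bbbb-cccccc"
NEW_UUID="newBBB-xxxx-yyyy-zzzz-aaaa-bbbb-dddddd"

# Stand-in for a file like /etc/lvm/backup/VG_XenStorage-<sr uuid>
cat > /tmp/vg_backup_copy <<EOF
physical_volumes {
    pv0 {
        id = "$OLD_UUID"
        device = "/dev/sda3"
    }
}
EOF

# On the real host, get the current UUID with: pvs -o pv_name,pv_uuid
sed -i "s/$OLD_UUID/$NEW_UUID/" /tmp/vg_backup_copy
grep "id = " /tmp/vg_backup_copy   # should now show the new UUID
```

On the actual server you would then restore the edited file with vgcfgrestore -f <file> <vg name> (step 7).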


Getting physical volume uuid and finding and modifying the file:
Removing SR:
Adding back local storage as an SR:

Adding mongo-10gen to apt-cacher (and Ubuntu)

On the server:

Add the following line to /etc/apt-cacher/apt-cacher.conf:
path_map = mongodb-10gen

Download the key and serve it to clients (I’d rather add the key to the repo server and have clients download it from there than have each client connect out and get it from the internet):
gpg --keyserver <keyserver> --recv-keys 7F0CEB10
gpg --armor --export 9958C967 > <key file>
python -m SimpleHTTPServer 8000


On client:

Create file /etc/apt/sources.list.d/10gen.list with the following contents:
deb http://your.apt-cacher.hostname:3142/mongodb-10gen dist 10gen

Download key from repo server:
wget http://your.apt-cacher.hostname:8000/<key file>
apt-key add <key file>
apt-get update

That should do it.  Then you can stop the python web server on the repo server.

Migrator Dragon for SharePoint 2013 fixing crash on ‘increase max upload file size on server’

When trying to upload files using this tool, the max upload size is 3MB (mentioned here:

To increase, you need to use this button on the tool, but it was crashing for me with the following error:

Description: The process was terminated due to an unhandled exception.
Exception Info: Microsoft.SharePoint.Administration.SPUpdatedConcurrencyException

In addition to this, I was getting lots of other errors from SharePoint:

The Execute method of job definition Microsoft.SharePoint.Diagnostics.SPDiagnosticsMetricsProvider (ID 7f18b8c7-49aa-45f2-8826-67ecff862c1a) threw an exception. More information is included below.

An update conflict has occurred, and you must re-try this action….

These two errors are linked, and the solution is described here: (although the details were slightly different on my installation, Win Server 2008 R2 and SP 2013)

To fix it, stop the SharePoint Timer Service, clear the configuration cache, and restart the SharePoint Timer Service.  The cache is at C:\ProgramData\Microsoft\SharePoint\Config\[guid] (one folder has XML files, the other persisted files); delete all the XML files, but not the folder itself.  (The KB article mentions not removing the cache.ini file but editing it instead; I didn’t have one.)  The article also mentions running a config refresh from SP admin, but I couldn’t find this, so didn’t do it, and the fix worked anyway.  You might need to restart IIS as well.
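As a sketch, the stop/clear/restart sequence from an elevated prompt might look like this (SPTimerV4 is, as far as I know, the timer service’s internal name; the [guid] folder name varies per farm, so it is left as a placeholder):

```batch
rem Stop the SharePoint Timer Service (internal name SPTimerV4)
net stop SPTimerV4

rem Delete only the XML files in the config cache folder;
rem replace [guid] with the actual folder name on your server
del "C:\ProgramData\Microsoft\SharePoint\Config\[guid]\*.xml"

rem Restart the timer service
net start SPTimerV4

rem Optionally restart IIS as well
iisreset
```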


(Also note: I think the max you can set with the button is the value you set for the web application’s max file upload.  SharePoint 2013 has a hard limit of 2047 MB, so you can put this value in both the SharePoint web application settings and Migrator Dragon and you’ll be able to upload large files up to 2 GB.  To change it in SP: Central Administration > Manage Web Applications > select your application and go to General Settings > Maximum upload size.)

Moving MS SQL 2008 database location

You cannot change the location of the system databases this way (master etc.), but user databases can be moved like so:

First take the database offline and move the mdf and ldf files to the new location, then run the following commands:

ALTER DATABASE db_name MODIFY FILE (
   Name = db_name,
   Filename = 'Q:\sqldata\db_name.mdf'
);
ALTER DATABASE db_name MODIFY FILE (
   Name = db_name_log,
   Filename = 'Q:\sqldata\db_name_log.LDF'
);

Finally, bring the database back online (ALTER DATABASE db_name SET ONLINE).


Taken from:

Notes on fixing XenServer VDIs

If you get ‘The VDI is not available’ when scanning an SR:

xe sr-scan uuid=[uuid of the SR]

should give you a more verbose error message, possibly including something specific, e.g. a VDI header error.

Get the VDI uuid
xe vdi-list

Forget the VDI and re-scan the SR
xe vdi-forget uuid=[vdi uuid]
xe sr-scan uuid=[sr uuid]

This might fix the issue; otherwise, if you need to preserve the data, you’ll need to restart the host.
If you can trash the data, you can try to delete the VDI:

xe vdi-destroy uuid=[vdi uuid]

This might not work; if not, try a restart.  You may also need to manually remove the LVM volume:

lvremove /dev/VG_XenStorage-[uuid of sr]/VHD-[uuid of vdi]

Then restart the machine, or Xen will be confused by the missing volume.

If you really need the data, there might be a way to fix broken headers/footers.  If the data on the VDI itself has issues, though, create a new VDI (larger than the original), use dd to copy the data from the broken VDI to the new one, then mount the new one on a VM and use a recovery tool to recover the data.

Instructions taken from post on Citrix Forums by Fabian Baena:

xe vdi-create sr-uuid=bc4c43f3-1321-2b17-bef0-3b58686a8075 name-label=copy virtual-size=210130436096

the sr-uuid is the storage where you want to put the copy. Take note of the uuid that comes up after you execute the command

get the uuid of your xenserver control domain by doing
xe vm-list name-label=Control\ domain\ on\ host:\ <name of your xenserver host> params=uuid

then create the vbd
xe vbd-create vm-uuid=<vm uuid you got from the previous command> vdi-uuid=<vdi uuid you got from the vdi-create command> device=0

plug the vbd
xe vbd-plug uuid=<vbd uuid you got from the previous command>

then do the copy

dd if=/dev/mapper/VG_XenStorage--bc4c43f3--1321--2b17--bef0--3b58686a8075-VHD--b079b55a--6679--47a1--b2a2--d207a476494e of=/dev/xvda

The copy will take several minutes. When finished unplug the vbd

xe vbd-unplug uuid=<vbd uuid you got from the vbd-create command>
xe vbd-destroy uuid=<vbd uuid you got from the vbd-create command>

Afterwards, you can connect the new vdi to a vm and see if you can recover anything.