
SharePoint 2013 Provider-Hosted App Architecture Notes

Trying to build a SharePoint 2013 app has probably been the worst experience of my coding life so far.

The Microsoft docs make it sound so easy; there are so many ways you can build an app!  You can use any programming language you like!  Look, we have a REST interface!  Look, mobile app APIs!

Hey, awesome, you think, looking through the initial introductory documentation.  Yeah, all the different information is a bit confusing, but look, they have how-tos and the APIs are documented properly.  How hard could it be?

Well, after wasting A LOT of time following guides and trying to build solutions that work, here’s some information that turned out to be crucial to the architectural decisions for these apps, and that I didn’t come across until much too late.  Some of it is probably wrong, because I’m finding it extremely difficult to get actual facts about the different ways you can build SharePoint apps, despite the millions of confusing articles on the Microsoft site (none of which seem to contain all the information you need to know) and the many tutorials (written only by people coding in ASP.NET, hosting their sites on Azure, or using OAuth).


Provider-hosted apps using the REST API:

  • You can either use the JavaScript cross-domain library or use OAuth (see the curl sketch after this list)
  • Using OAuth requires an Azure AD account, and you also need to configure your SharePoint installation to use Azure AD (and obviously the SharePoint installation needs access through firewalls etc. to communicate with Azure AD).  In addition, the app needs to be registered in Azure.
  • I’ve seen some tutorials saying that for testing you just need to register the app in SP and not Azure, and that you don’t need Azure AD in this case; I couldn’t get this to work.
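
For reference, the REST call itself is simple once the authentication is sorted.  Here’s a minimal curl sketch of an OAuth-style request; the hostname and site path are placeholders, and it assumes you already have a valid access token in $ACCESS_TOKEN:

# minimal sketch: list the lists in a site via the SharePoint 2013 REST API
# (assumes a valid OAuth bearer token; hostname and site path are placeholders)
curl -s "https://sharepoint.example.com/sites/dev/_api/web/lists" \
  -H "Authorization: Bearer $ACCESS_TOKEN" \
  -H "Accept: application/json;odata=verbose"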

Provider-hosted apps using high trust:

  • The how-to guides all use a couple of Microsoft-provided C# files for the authentication, in addition to Windows Authentication for the site in IIS, and I can’t see any documentation on how the process actually works.  Reading through the files, they get the Windows user information, so I have a feeling this method can only be used for apps that are (1) built in ASP.NET/C# running on a Windows machine, and (2) in the same domain as the SharePoint installation.


So if you want to build an app that can modify SharePoint data in a non-Microsoft language, host it on a non-Windows machine, not pay for an Azure subscription, and not change the authentication method of your SharePoint site, your options are:

  1. A JavaScript frontend (using the cross-domain library) to deal with SharePoint, plus likely a backend in whatever language to do anything you can’t do with JavaScript (use 3rd-party APIs etc.)
  2. A high-trust app to act as a proxy between your app and the SharePoint installation*

*I’m still trying to figure out how it would be possible to send the REST request I want to make to SharePoint to the proxy instead, and have the proxy sign it and forward it on to SharePoint…

Postfix queue management bash scripts

A couple of scripts I used while cleaning up a mail server.  I’m sure they can be improved, and the last one is quite specific to my own requirements, but I’ll put them here anyway.

Move emails with a particular subject from the hold queue to the deferred queue:

# change directory to postfix's queue directory
cd "$(postconf -h queue_directory)/hold"
# loop over queue files
for i in * ; do
# postcat the file, grep for subject "test", and if found
# run postsuper -H to release the message from hold (it moves to the deferred queue)
postcat "$i" | grep -q '^Subject: test' && postsuper -H "$i"
done

Delete emails in the hold queue that are being sent to a recipient that has already received an email (i.e. the recipient is in the mail log), or duplicate emails (with the same recipient/subject):

# change directory to postfix's queue directory
cd "$(postconf -h queue_directory)/hold"
# loop over queue files, counting deletions
NUM=0
for i in * ; do
   if [ -f "$i" ]; then
       # the To: line plus the following line (usually the subject) identifies the email
       IDENT=$(postcat "$i" | grep -A 1 "To:")
       RECIPIENT=$(postcat "$i" | grep "To:" | cut -c 5- )
       if grep -q "$RECIPIENT" /root/postfixtmp/logs/mailsent.log; then
           echo "* already sent to $RECIPIENT, deleting $i " | tee -a /root/postfixtmp/queueclean.log
           echo "$IDENT" | tee -a /root/postfixtmp/queueclean.log
           NUM=$((NUM + 1))
           postsuper -d "$i"
           echo "----" | tee -a /root/postfixtmp/queueclean.log
       else
           # check the rest of the queue for messages with the same recipient/subject
           for o in * ; do
              if [ -f "$o" ]; then
                  if [ "$o" != "$i" ]; then
                     CURRENT=$(postcat "$o" | grep -A 1 "To:")
                     if [ "$CURRENT" = "$IDENT" ]; then
                        echo " duplicate email, deleting $o *" | tee -a /root/postfixtmp/queueclean.log
                        echo "$CURRENT" | tee -a /root/postfixtmp/queueclean.log
                        NUM=$((NUM + 1))
                        postsuper -d "$o"
                        echo "----" | tee -a /root/postfixtmp/queueclean.log
                     fi
                  fi
              fi
           done
      fi
   fi
done
echo "Deleted $NUM emails" | tee -a /root/postfixtmp/queueclean.log

Recovering VMs that were on local storage after removing host from XenServer pool

When you remove a host from a XenServer pool, the host gets reinitialized, so any VMs on its local storage are lost.  Luckily, it’s not too hard to recover the VDIs from LVM.  Here’s an outline of the steps, with some links that have more info / specific commands.

  1. If you can, join the host back to the pool and connect to your shared storage; this way you get the VMs (which were moved to the pool when you added the host) and the VDIs, and only have to match the two together at the end
  2. Navigate to /etc/lvm/backup and find the file with the previous LVM metadata (the logical volumes should include all of your old VDIs / snapshots, and the file should have the relevant device path, e.g. /dev/sda3)
  3. Find the current physical volume uuid
  4. Back up the /etc/lvm directory
  5. Modify the old volume group file and replace the old physical volume uuid with the current one
  6. Detach the local storage SR from the XenServer (see link below)
  7. Use vgcfgrestore to restore the old volume group file (see the sketch after this list)
  8. If you run vgscan you should see that the newer volume group has been replaced with the old one (the name will be the same as the old one)
  9. Attach the local storage SR to the XenServer with the current volume group name
  10. Create a new pbd with the SCSI ID and plug it in (see link below)
  11. Scan the new SR; it should pick up the old VDIs, but without any metadata.  If you create a new VM and attach these one by one as secondary disks, mount them on the new VM, and check what they are, then you can rename them and attach them back to your VMs (which should be sitting in your pool)
  12. Move all the VDIs you need over to your new SR, then you can remove your host again
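
For orientation, here’s a rough sketch of the key commands for the LVM side (steps 3–8) and the SR reattach (steps 9–10).  This is a sketch under assumptions: the device path, the uuids in angle brackets, and the SR name-label are placeholders for your setup, and the linked Citrix articles have the exact commands:

# step 3: get the current physical volume uuid (device path from the old backup file)
pvs -o pv_name,pv_uuid /dev/sda3
# step 4: back up /etc/lvm before changing anything
cp -a /etc/lvm /root/lvm.bak
# steps 5-7: after replacing the old pv uuid in the old backup file with the
# current one, restore that volume group definition and re-scan
vgcfgrestore -f /etc/lvm/backup/VG_XenStorage-<old sr uuid> VG_XenStorage-<old sr uuid>
vgscan
# steps 9-10: reintroduce the volume group as a local SR and plug it into the host
xe sr-introduce uuid=<old sr uuid> type=lvm name-label="Local storage" content-type=user
xe pbd-create host-uuid=<host uuid> sr-uuid=<old sr uuid> device-config:device=/dev/sda3
xe pbd-plug uuid=<pbd uuid>
xe sr-scan uuid=<old sr uuid>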


Resources
Getting physical volume uuid and finding and modifying the file: http://support.citrix.com/article/CTX128097
Removing SR: http://support.citrix.com/article/CTX131328
Adding back local storage as an SR: http://support.citrix.com/article/CTX121896

Adding mongo-10gen to apt-cacher (and Ubuntu)

On the server:

Add the following line to /etc/apt-cacher/apt-cacher.conf:
path_map = mongodb-10gen http://downloads-distro.mongodb.org/repo/ubuntu-upstart

Download the key and serve it to clients (I’d rather add the key to the repo server and have clients download it from there than have each client connect out to the internet):
gpg --keyserver keyserver.ubuntu.com --recv-keys 7F0CEB10
gpg --armor --export 7F0CEB10 > mongodb-10gen.pub
python -m SimpleHTTPServer 8000


On the client:

Create file /etc/apt/sources.list.d/10gen.list with the following contents:
deb http://your.apt-cacher.hostname:3142/mongodb-10gen dist 10gen

Download key from repo server:
wget http://your.apt-cacher.hostname:8000/mongodb-10gen.pub
apt-key add mongodb-10gen.pub
apt-get update
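
If you want to sanity-check the mapping before pointing real clients at it, fetching the repo’s Release file through the cache should work (the hostname is a placeholder, and the path follows from the deb line above):

# quick check: request the Release file via the cacher's path_map entry
curl -I http://your.apt-cacher.hostname:3142/mongodb-10gen/dists/dist/Release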

That should do it.  Then you can stop the Python web server on the repo server.

Migrator Dragon for SharePoint 2013: fixing crash on ‘increase max upload file size on server’

When trying to upload files using this tool, the max upload size is 3 MB (mentioned here: http://gallery.technet.microsoft.com/office/The-Migration-Dragon-for-c0880e59#content)

To increase it, you need to use the ‘increase max upload file size on server’ button in the tool, but it was crashing for me with the following error:

Description: The process was terminated due to an unhandled exception.
Exception Info: Microsoft.SharePoint.Administration.SPUpdatedConcurrencyException

In addition to this, I was getting lots of other errors from SharePoint:

The Execute method of job definition Microsoft.SharePoint.Diagnostics.SPDiagnosticsMetricsProvider (ID 7f18b8c7-49aa-45f2-8826-67ecff862c1a) threw an exception. More information is included below.

An update conflict has occurred, and you must re-try this action….

These two errors are linked, and the solution is described here: http://support.microsoft.com/kb/939308 (although the details were slightly different on my installation: Win Server 2008 R2 and SP 2013)

To fix:

  1. Stop the SharePoint Timer Service
  2. Clear the configuration cache at C:\ProgramData\Microsoft\SharePoint\Config\[guid] (one folder has XML files, the other persisted files) by deleting all the XML files (not the folder itself).  The KB article says not to remove the cache.ini file and to edit it instead, but I didn’t have one.
  3. Restart the SharePoint Timer Service

The article also mentions running a config refresh from SP admin, but I couldn’t find this, so didn’t do it, and the fix worked anyway.  You might need to restart IIS as well.


(Also note: I think the max you can set with the button is the value you set for the web application’s max file upload.  SharePoint 2013 has a hard limit of 2047 MB, so you can put this value in both the SharePoint web application settings and Migrator Dragon, and you’ll be able to upload large files up to 2 GB.  To change it in SP: Central Administration > Manage Web Applications > select your application and go to General Settings > Maximum upload size.)

Moving MS SQL 2008 database location

You cannot change the installation location (so the master and other system databases stay put), but user databases can be moved like so:

Take the database offline (the first two commands below), move the mdf and ldf files to the new location, then run the remaining commands:

ALTER DATABASE "db_name" SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
ALTER DATABASE "db_name" SET OFFLINE;
ALTER DATABASE "db_name" MODIFY FILE
(
   Name = "db_name",
   Filename = 'Q:\sqldata\db_name.mdf'
);
ALTER DATABASE "db_name" MODIFY FILE
(
   Name = "db_name_log",
   Filename = 'Q:\sqldata\db_name_log.LDF'
);
ALTER DATABASE "db_name" SET ONLINE;
ALTER DATABASE "db_name" SET MULTI_USER;


Taken from: http://stackoverflow.com/questions/6584938/move-sql-server-2008-database-files-to-a-new-folder-location

Notes on fixing XenServer VDIs

If you get “The VDI is not available” when scanning an SR, running

xe sr-scan uuid=[uuid of the SR]

should give you a more verbose error message.  Also,
/var/log/SMlog
should give an even more verbose error, e.g. a VDI header error.

Get the VDI uuid:
xe vdi-list

Forget the VDI and re-scan the SR:
xe vdi-forget uuid=[vdi uuid]
xe sr-scan uuid=[sr uuid]

This might fix the issue; otherwise, if you need to preserve the data, you’ll need to restart the host.
If you can trash the data, you can try to delete the VDI:

xe vdi-destroy uuid=[vdi uuid]

This might not work; if not, try a restart.  You may also need to manually remove the LVM volume:

lvremove /dev/VG_XenStorage-[uuid of sr]/VHD-[uuid of vdi]

Then restart the machine, or Xen will be confused by the missing volume.

If you really need the data, there might be a way to fix broken headers/footers.  If the data on the VDI has issues, though, create a new VDI (larger than the original), use dd to copy the data from the broken VDI to the new one, then mount the new one and use a recovery tool on the VM to recover the data.

Instructions taken from a post on the Citrix Forums by Fabian Baena:

xe vdi-create sr-uuid=bc4c43f3-1321-2b17-bef0-3b58686a8075 name-label=copy virtual-size=210130436096

The sr-uuid is the storage where you want to put the copy.  Take note of the uuid that comes up after you execute the command.

Get the uuid of your XenServer control domain:
xe vm-list name-label=Control\ domain\ on\ host:\ <name of your xenserver host> params=uuid

Then create the vbd:
xe vbd-create vm-uuid=<vm uuid you got from the previous command> vdi-uuid=<vdi uuid you got from the vdi-create command> device=0

Plug the vbd:
xe vbd-plug uuid=<vbd uuid you got from the previous command>

Then do the copy:

dd if=/dev/mapper/VG_XenStorage--bc4c43f3--1321--2b17--bef0--3b58686a8075-VHD--b079b55a--6679--47a1--b2a2--d207a476494e of=/dev/xvda

The copy will take several minutes.  When finished, unplug the vbd:

xe vbd-unplug uuid=<vbd uuid you got from the vbd-create command>
xe vbd-destroy uuid=<vbd uuid you got from the vbd-create command>

Afterwards, you can connect the new vdi to a vm and see if you can recover anything.

Getting pypicache running on Ubuntu 10.04

Pypicache is a great way to host a local PyPI repository.  Unfortunately, it took some time for me to get it working under Ubuntu 10.04.

Pypicache is written for Python 2.7+, and Ubuntu 10.04 uses 2.6.  Luckily, the only backwards incompatibility seems to be string formatting (2.6 needs explicit positional indexes, e.g. {0} rather than {}, in format strings).  Sooooo, get a copy of the pypicache source and fix all the string formatting in the .py files under the pypicache directory (alternatively, clone this: https://github.com/demelziraptor/pypicache – it might be out of date, so check first).

Then, while in the directory with your copy of pypicache, run pip install -r requirements.txt --use-mirrors
(Or ‘make init’ if you don’t mind it downloading all the dev requirements too.)

Then ‘make runserver’ to run the server in debug mode, with the target directory /tmp/pypicache

Test that the server runs OK and that you can use it for whatever you want (in my case, a pip proxy).  Then you can run it for real using ‘PYTHONPATH=. python -m pypicache.main /tmp/mypackages’
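
For the pip proxy use case, pointing pip at the local index looks something like this (the port and package name are illustrative; check which port your pypicache instance actually listens on):

# hypothetical usage: install a package through the local pypicache simple index
pip install --index-url http://localhost:8080/simple/ requests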