
Microsoft Planner: Linking Plans to a Project task


For those on the Office Insider builds – or First Release – you will already be seeing a new link in the ribbon for Tasks in Project Online Desktop Client.  This is a feature only available for the subscription client – so don’t expect to see this in Project Professional 2016 if you own the perpetual license.  The new link enables you to link a task in Project Online to a Plan in Microsoft Planner, assuming you have an Office 365 subscription that includes Planner.  Let’s take a look and think about how you might use this.

You can see the link on the upper right – and clicking opens up the guide on the left.

[Screenshot: the Planner link in the Task ribbon, with the guide open on the left]

Clicking the link opens a selection box for the Office 365 Group (not the Plan, as a Group can have more than one Plan – from Teams, for example), and starting to type will find Groups that match the entered string.

[Screenshot: the Office 365 Group selection box]

Selecting the plan then tells you who will get added to the Group – if they are not already there – and these will be the assigned user and the current user.  The project and the task need to have been published at this point to be able to make the link (you will get a message if they are not).

The right-hand screenshot below shows the result when a Group that has more than one plan is chosen – and in this case it is also adding the assigned resource (Sara) as well as me.

[Screenshots: choosing a plan within the Group, and the users who will be added]

Once the link is made you get a link in the guide and a Planner icon in the grid.

[Screenshot: the plan link in the guide and the Planner icon in the grid]

You can have links to multiple plans – and clicking the link will take you to the plan – as you might expect:

[Screenshots: links to multiple plans, and the plan opened in Planner]

But you can’t link two different tasks to the same plan – the thinking here is that you wouldn’t know which task the plan progress was being made against if you linked one plan to two tasks:

[Screenshot: the message shown when linking a second task to the same plan]

In this initial release it is purely a link – so it enables the project manager to offload the tracking of that task to someone else who will be managing it in Planner.  The PM can then see how progress is going and update the project task accordingly after reviewing the progress of tasks and possibly checklists in the plan.  We are keen to hear how you might use this and what future features you would want to see.  Use either the feedback option under the smiley top right in Project – or the Project UserVoice site.  If your feedback is more about the Planner end of the equation – then Planner has a UserVoice too!


Microsoft Planner: Considerations for Reporting–Part 4 of 3


As I mentioned in Part 3 – there are a few other things I wanted to cover, so here is part 4.  If you didn’t see the other parts – then here are the links – so come back when you’ve read them.

The aim of the series wasn’t to provide a finished answer to every possible reporting scenario, but to expose you to the data model and how you can pull the data – and along the way make you think about what you really need and how you might use and govern the usage of Planner to achieve that goal.  And even in thinking that through, decide whether Planner is really the right tool.  If you want reports that show resource capacity, usage and actual work then maybe Project Online is where you need to be?  The recent release of the link capability from Project Online to Planner opens up new opportunities to use the products together (as do other 3rd party offerings).  If Planner is the right place then you need to consider how you ensure your reporting is valid – so some form of templating and filtering to make your reports consistent will certainly be necessary.

Another aim for me was to look at some different technologies along the way – so I touched on Flow, Azure Functions and Cosmos DB – as well as Power BI, which I will come back to later in this posting.  I’m really sold on Flow, and I’ll be doing some other postings soon on different scenarios around Planner and Project Online.  I’ll also be looking closer at Logic Apps in Azure – very similar in capabilities to Flow and PowerApps.  Flow and Azure Functions make a great combination, and I’ll also be looking at C# Functions to see what extra performance they give (as well as potentially reduced cost if they use fewer resources than my PowerShell examples – we will see).  But that is for another day… I initially prepared the topic of this blog as a technical session for Microsoft’s internal training event, Ready, and re-worked some of it for the blog – and if I was starting again I’d probably re-work even more.

I’m not sure Azure Cosmos DB is the right data store for this kind of reporting solution, but I enjoyed learning about its capabilities and would certainly look to it as a back-end for any application development I might do.  If I was using Cosmos DB I’d also take a different approach and create a document for each plan – including the plan details, tasks, task details and assignments inside that one document.  My approach showed my background in relational SQL databases – trying to make collections = tables.
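As a rough illustration of what I mean, a single plan document might nest everything like this – the property names follow the Graph data used in the earlier parts, but the nesting itself is just a sketch:

{
  "id": "SeQaqSFupUyLdipuqVNVqmQACp9Y",
  "title": "Tech Summit 2017",
  "owner": "7497796a-da37-4d4e-ad2e-5e021de9b7ea",
  "createdBy": { "user": { "id": "<user guid>" } },
  "tasks": [
    {
      "id": "<task id>",
      "title": "<task title>",
      "bucketId": "<bucket id>",
      "percentComplete": 0,
      "assignments": [ { "userId": "<user guid>" } ]
    }
  ]
}

One document per plan means a single read gets you everything about a plan, at the cost of rewriting the document whenever any task changes.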

For those keen to learn more: I ‘discovered’ Cosmos DB (under its previous DocumentDB name) while working through the Microsoft Professional Program for Big Data.

You can ‘audit’ the edX courses for free – some great sessions in this program as well as the Data Science track.

I more recently used the Lynda resource Cosmos DB Developer Deep Dive - https://www.lynda.com/Azure-tutorials/Cosmos-DB-Developer-Deep-Dive/612187-2.html.

If you do play around with Cosmos DB then I’d certainly suggest the resources above – and be sure to set the Throughput (RU/s) to 400 for starters so you don’t get any billing surprises.  You can set this when creating the database and also when creating collections.  Also be sure you clean things up when you aren’t using it – either drop the collections (an empty DB does not consume much) or delete the DB.  I ended up using the new Azure Cloud Shell and a couple of scripts to do the creation/deletion of my collections.  The Geo capabilities of Cosmos DB are really easy to set up – you can add read copies of your database in other geographies – and even fail over to make the other region the write region.
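If you want to script that creation and cleanup yourself, commands along these lines work in Cloud Shell – this is a sketch using today’s az CLI syntax (which has changed since the DocumentDB days), and the account, database and collection names are placeholders:

# create a resource group, account, database and a 400 RU/s collection
az group create --name planner-reports-rg --location westus
az cosmosdb create --name plannerreportsacct --resource-group planner-reports-rg
az cosmosdb sql database create --account-name plannerreportsacct --resource-group planner-reports-rg --name PlannerData
az cosmosdb sql container create --account-name plannerreportsacct --resource-group planner-reports-rg --database-name PlannerData --name Plans --partition-key-path "/id" --throughput 400

# when you have finished playing, deleting the resource group cleans everything up
az group delete --name planner-reports-rg --yes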

[Screenshot: the Cosmos DB regions map for geo-replication]

This can be very useful if, for example, you were developing a solution in the US that would eventually be rolled out with most users in Europe – you could start your work on a local write region in the US, then add a Europe read region and fail over (potentially deleting the US read region) once you went into production.  If you are just kicking the tyres however, you will want to remove additional regions once you have finished playing with them – as otherwise you will get billed for the other regions.

Another great source of Azure information is The Azure Podcast, and with Ignite 2017 coming to a close there are also plenty of recorded sessions to review.

I should also point out that you’d want to test that the Graph calls I’ve used do what you need them to do, and that the account you use has all the permissions required and is finding the plans.  In my case they were all my plans, created centrally – check to make sure you aren’t getting throttled or hitting any limits on plan or task ownership.  As I’ve mentioned in the earlier posts – wherever you are pushing the data you may want to trim out fields/properties you aren’t interested in – or potentially add other metadata that adds value for your scenario.
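On the throttling point – Graph signals it with a 429 response and a Retry-After header, so a guard like this sketch (wrapped around whichever call you make) is worth considering:

try {
    $response = Invoke-WebRequest -Uri $uri -Method Get -Headers $headers -UseBasicParsing
}
catch [System.Net.WebException] {
    if ([int]$_.Exception.Response.StatusCode -eq 429) {
        # honour the Retry-After header before trying again
        $retryAfter = [int]$_.Exception.Response.Headers["Retry-After"]
        Start-Sleep -Seconds $retryAfter
        $response = Invoke-WebRequest -Uri $uri -Method Get -Headers $headers -UseBasicParsing
    }
    else { throw }
}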

The final part I wanted to touch on in this ‘Part 4’ of the trilogy was directly reporting on Cosmos DB from Power BI – which is in Beta right now.

You can connect to Cosmos DB using the Azure Cosmos DB (Beta) option in the Get Data panel – under Azure:

[Screenshot: the Azure Cosmos DB (Beta) option under Get Data > Azure]

You will initially get a pop-up mentioning this is a preview connector – and I’ve seen it enough times so I’m checking the box to not show it again:

[Screenshot: the preview connector warning]

Next you get to enter the URL of your Cosmos DB, and optionally the Database and Collection.

I’m entering my URL and the Database, then clicking OK – I’ll select the Collection later.  Optionally you could also enter a SQL statement to make your data selection.
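For example, a statement like this in that box would trim the feed down to just the plan properties used below (standard Cosmos DB SQL – adjust to whatever you need):

SELECT c.id, c.title, c.owner, c.createdBy FROM c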

[Screenshot: the Cosmos DB URL and Database dialog]

I am then presented with my list of collections – and to start with I’m selecting Plans.  You can see that I just get one column of ‘records’ titled Document.  Clicking Edit allows me to see what the data is and choose what I want to get.

[Screenshot: the collection list showing the Document column]

Clicking Edit opens the query editor (if I had selected more than one collection I could have opened the query editor later) – to expand the records I can click the icon at the top right of the Document column.

[Screenshots: the query editor and the expand icon on the Document column]

I can select the ‘columns’ I want – and note the warning message that the list may not be complete.  If, for example, there were properties not present in the first sampled rows, you wouldn’t see them listed.  This isn’t an issue in this case – but when expanding other columns later we will see when it might be.  For now I am only interested in createdBy, owner, title and id.

[Screenshot: column selection with the ‘List may not be complete’ warning]

This will then display the data in the grid – and we can see our owner (actually the GUID of the Group that this Plan belongs to) and our title and the Plan ID – and that createdBy is also another record with additional data inside.

[Screenshot: the expanded Plan data in the grid]

If we try to expand we can see that there are two more columns – user and application.  I’m only really interested in user but I’ll leave all checked just to show the behavior.

[Screenshot: expanding createdBy to user and application]

And we have more levels to go…

[Screenshot: further nested record levels]

This just maps onto the json of the Plan – so looking in Cosmos DB Data Explorer and querying for the first record in our list – Tech Summit 2017 – using the filter SELECT * FROM c WHERE c.title = 'Tech Summit 2017' I can see that createdBy contains user and application – and each of those has a displayName and id.

[Screenshot: the plan json in Cosmos DB Data Explorer]

As I’m only interested in the id of the user I deleted the application column, then expanded the user column but only selected id.

[Screenshot: the user column expanded to just the id]

At this stage you will likely want to rename the columns – to at least drop the ‘Document’ prefix – and depending on the data you may need to change the type.  Generally all data from the json will be interpreted as text – so you will need to map to different number formats, or possibly dates, in some cases.  The usual time I remember this is when Power BI will only count things – and not show me the totals I expect!

[Screenshot: renaming columns and changing types]

I’m happy with the Plan feed – and don’t need to change types – so I can just choose Close and Apply.

[Screenshot: Close and Apply in the query editor]

Going through a similar set of steps for the Tasks collection and choosing the columns I am interested in shows that some of these are also their own records within the json – appliedCategories and assignments (I should really have unchecked the ‘use original column name as a prefix’ option…)

[Screenshot: the Tasks columns, including appliedCategories and assignments]

Expanding the appliedCategories column I can see I have the categories – but am missing category4.

[Screenshot: expanded appliedCategories with category4 missing]

This is where ‘Load more’ is important.

[Screenshot: the Load more option]

Clicking OK expands out the categories – by adding 6 columns which contain TRUE or null – indicating which categories are set for these tasks.  The actual names of the categories can be pulled from PlanDetails – but in this case, as my plans were filtered to all be ones created from my template, I know what my categories should be – and they are the same for all PlanIds.  In this case I could probably just do some kind of substitution – or that may have been a transform that could have been handled when the plan data was pulled, before pushing onto my data store.  A choice you can make.

Here I’m using Power BI to replace the TRUE values in category1 with Marketing after first changing the column from Boolean to Text.
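Behind the scenes the query editor records those two steps as Power Query (M) – roughly like the following, where the step and column names will be whatever your own query generated:

= Table.TransformColumnTypes(ExpandedCategories, {{"Document.appliedCategories.category1", type text}})
= Table.ReplaceValue(ChangedType, "TRUE", "Marketing", Replacer.ReplaceText, {"Document.appliedCategories.category1"})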

[Screenshot: replacing TRUE with Marketing in category1]

Moving on to assignments – this is where things get a little messy, as they follow the same kind of pattern as the categories, but potentially there could be a lot more values – and having many columns with headers that are the GUID of the resource, and then having to replace the ‘true’ with names, would quickly get tedious.  Another case where transforming the data earlier makes sense.  And it was for this very reason that I reworked things to pull out a separate Assignments collection!

Some data type changes to ensure my percent complete and counts of checkbox items are integers and my dates are date/time – and I’m good to go!

Following similar steps I can get my assignments and members in and then link my plans to my tasks and tasks to my assignments and assignments to members.  For members I’m only interested in the id and displayName.

[Screenshots: the assignments and members queries]

Then with a bit of filtering it is easy to get a list of my assignments across all plans where the task has yet to be started.  It wouldn’t take much effort here to also change my column heading of percentComplete to Progress, and the values of 0 to Not started – to match up better to the UI of Planner.

[Screenshot: assignments across all plans where the task has yet to be started]

Project Online: Setting Permissions in sites beyond PWA


Thanks goes to Paul Mather for most of the technical content of this particular post – as I’m taking a couple of his ideas and bringing them together.  I'm using PowerShell to set permissions in project sites and using Flow and Azure Functions to automate running that PowerShell.  I’ve also included a video that walks through this.  The original recording was for an internal presentation – so I’ve edited it down a bit.

First I’ll introduce the scenario.  The recent announcement that Project Online can now support 30,000 projects was achieved in part by allowing sites for your projects to be created in site collections outside of the ‘PWA’ site collection.  So in my example I have /sites/pwa as my main Project Web App site collection – but I can create sites in /sites/pwasites.  The recommended limit for subsites in a site collection is 2000, and going beyond this does impact the performance of Project Online – and if you also have the SharePoint publishing features activated it can seriously impact every single page load in PWA.  Expect some guidance from the product group shortly that suggests having no subsites at all in your PWA site!  The limitations when you move sites to another site collection are that you can no longer synchronize permissions to the site based on the project stakeholders, you cannot sync tasks to these sites – and you must be in Project Permissions mode.  This article looks to address the first of those.

Here is the video – but I’ll also talk this through in words further down and have the sample code so you can try this for yourselves.

In broad terms I’m triggering a Flow on publish from Project Online, then writing some data to a Planner plan (an unnecessary step – but it shows how you might also write status during the process) and then hitting the Azure Function Url, passing in the PROJ_UID for the published project – so the flow looks like this:

[Screenshot: the Flow steps]

So the first part is monitoring my PWA for publishes; then, if the Project Type is 0 (a normal project), it writes ProjectName, ProjectType and ProjectId to my PublishedProjects plan in my To do bucket and then makes the HTTP call:

[Screenshot: the Flow condition, Planner action and HTTP action]

This is making a POST to my Azure Function, passing some json containing the ProjectId in the body.

First the PowerShell code in my Function reads this ProjectId from the incoming request, references the modules it needs (which are also loaded into the Azure Function environment) and sets up the environment – including forming the OData Url to pull the ProjectName and ProjectWorkspaceInternalUrl for the specific project.

# POST method: $req
$requestBody = Get-Content $req -Raw | ConvertFrom-Json
$projID = $requestBody.projID


# GET method: each querystring parameter is its own variable
if ($req_query_name)
{
$projID = $req_query_name
}


#add SharePoint Online DLL - update the location if required
Import-Module "D:\Home\site\wwwroot\HttpTriggerPowerShell1\Microsoft.SharePoint.Client.dll"
Import-Module "D:\Home\site\wwwroot\HttpTriggerPowerShell1\Microsoft.SharePoint.Client.Runtime.dll"


#set the environment details
$PWAInstanceURL = "https://brismithpjo.sharepoint.com/sites/pwa"

$username = "brismith@brismithpjo.onmicrosoft.com"
$password = "<password>"
$securePass = ConvertTo-SecureString $password -AsPlainText -Force


#set the Odata URL to get the Workspace Url
$url = $PWAInstanceURL + "/_api/ProjectData/Projects()?`$Filter=ProjectId eq GUID'$projID'&`$Select=ProjectName, ProjectWorkspaceInternalUrl"

Once we have this set, the next stage is to call the OData Url and get that data.  I’m using a ForEach even though this will only return a single row – which makes the code re-usable in other scenarios where you might be pulling more data.  I get my name and site Url as variables and also use these to construct the group name that I will be adding my users to – which will fit the default SharePoint group of <Projectname> Members.  This assumes that the project name is the same as the site name – be careful here, as that may not be a safe assumption depending on the character set and any special characters used.

#get all of the data from the OData URL
$results1 = @()
while ($url){
    [Microsoft.SharePoint.Client.SharePointOnlineCredentials]$spocreds = New-Object Microsoft.SharePoint.Client.SharePointOnlineCredentials($username, $securePass);
    $webrequest = [System.Net.WebRequest]::Create($url)
    $webrequest.Credentials = $spocreds
    $webrequest.Accept = "application/json;odata=verbose"
    $webrequest.Headers.Add("X-FORMS_BASED_AUTH_ACCEPTED", "f")
    $response = $webrequest.GetResponse()
    $reader = New-Object System.IO.StreamReader $response.GetResponseStream()
    $data = $reader.ReadToEnd()
    $results = ConvertFrom-Json -InputObject $data
    $results1 += $results.d.results
    if ($results.d.__next){
        $url=$results.d.__next.ToString()
    }
    else {
        $url=$null
    }
}


#capture the site Url and default group name for the project returned
foreach ($projectrow in $results1)
{
    $projectSiteURL = $projectrow.ProjectWorkspaceInternalUrl
    $groupName = $projectrow.ProjectName + " Members"
}

Next I’m checking the site Url to see if it is in my pwasites site collection – as these are the only ones I’m interested in.  One thought I had was trying to make this differentiation in Flow and not even calling the function if I don’t need it – you can take that as homework.  For those projects whose site is in pwasites I then make a REST call to the ProjectServer endpoint to get the Ids of the resources in the project.  For each team member I then get their account and name from OData – making sure I’m only getting ones that actually have a ResourceNTAccount – so taking my $team array and filtering down to a $teamusers array of ones that have accounts, so I can add them to the SharePoint site.  The final step is making a SharePoint CSOM call adding my users to the group.

#Filter only for projects that are external - if $projectSiteURL contains pwasites
if ($projectSiteURL -like '*pwasites*'){

    #get all of the team members - will include resources that are not users too
    $team = @()

    #set the REST URL for the project
    $url = $PWAInstanceURL + "/_api/ProjectServer/Projects('$projID')/ProjectResources()?`$Select=Id"

    #get all of the data from the REST URL for the Project Team
    while ($url){
        [Microsoft.SharePoint.Client.SharePointOnlineCredentials]$spocreds = New-Object Microsoft.SharePoint.Client.SharePointOnlineCredentials($username, $securePass);
        $webrequest = [System.Net.WebRequest]::Create($url)
        $webrequest.Credentials = $spocreds
        $webrequest.Accept = "application/json;odata=verbose"
        $webrequest.Headers.Add("X-FORMS_BASED_AUTH_ACCEPTED", "f")
        $response = $webrequest.GetResponse()
        $reader = New-Object System.IO.StreamReader $response.GetResponseStream()
        $data = $reader.ReadToEnd()
        $results = ConvertFrom-Json -InputObject $data
        $team += $results.d.results
        if ($results.d.__next){
            $url=$results.d.__next.ToString()
        }
        else {
            $url=$null
        }
    }

    #get all of the team members' login accounts and remove resources that are not users in PWA
    $teamusers = @()

    ForEach ($teammember in $team)
    {
        #set the resource ID
        $teammemberID = $teammember.Id

        #set up the OData URL
        #$url = $PWAInstanceURL + "/_api/ProjectServer/EnterpriseResources('$teammemberID')/User?`$Select=LoginName,Id" #alternative to OData
        $url = $PWAInstanceURL + "/_api/ProjectData/Resources()?`$Select=ResourceNTAccount,ResourceName&`$Filter=ResourceId eq guid'$teammemberID' and ResourceNTAccount ne null"

        #get the data from the OData URL for the project team members that are users in PWA
        [Microsoft.SharePoint.Client.SharePointOnlineCredentials]$spocreds = New-Object Microsoft.SharePoint.Client.SharePointOnlineCredentials($username, $securePass);
        $webrequest = [System.Net.WebRequest]::Create($url)
        $webrequest.Credentials = $spocreds
        $webrequest.Accept = "application/json;odata=verbose"
        $webrequest.Headers.Add("X-FORMS_BASED_AUTH_ACCEPTED", "f")
        $response = $webrequest.GetResponse()
        $reader = New-Object System.IO.StreamReader $response.GetResponseStream()
        $data = $reader.ReadToEnd()
        $results = ConvertFrom-Json -InputObject $data
        $teamusers += $results.d.results
    }

    #add the users to the project site
    ForEach ($teamuser in $teamusers)
    {
        $teamuserLogin = $teamuser.ResourceNTAccount
        $teamusername = $teamuser.ResourceName

        #get SP site client context
        $ctx = New-Object Microsoft.SharePoint.Client.ClientContext($projectSiteURL)
        $credentials = New-Object Microsoft.SharePoint.Client.SharePointOnlineCredentials($username, $securePass)
        $ctx.Credentials = $credentials

        #get all the site groups on the Project Site
        $projSiteGroups = $ctx.Web.SiteGroups
        $ctx.Load($projSiteGroups)

        #get the correct group to add the user into
        $projSiteGroup = $projSiteGroups.GetByName($groupName)
        $ctx.Load($projSiteGroup)

        #add the user to the group on the Project Site
        $projSiteUser = $ctx.Web.EnsureUser($teamuserLogin)
        $ctx.Load($projSiteUser)
        $teamMemberToAdd = $projSiteGroup.Users.AddUser($projSiteUser)
        $ctx.Load($teamMemberToAdd)
        $ctx.ExecuteQuery()
    }
}

To test your function you can manually give it the project GUID of a known project with a site in your 'pwasites' site collection – simply using the json format:

{
  "projID": "77c4992f-562f-e711-80d3-00155de84000"
}

And once tested and working, use the Url for your function in the Flow HTTP step.  Get function URL is a link at the top right of your Azure Function editing page.
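The Url follows the usual Azure Functions pattern – something like the following, with the app and function names being whatever you chose, and the json body above passed in the body of the HTTP action:

https://<yourfunctionapp>.azurewebsites.net/api/HttpTriggerPowerShell1?code=<your function key>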

You could probably use Azure Automation for this too, as I’m using PowerShell – and run it on a timer rather than triggered by Flow.  It really depends on your requirements.  I’ve also simplistically added all users to Members, but you could potentially do a more granular setting of permissions and add to other groups.  Subsequent publishes will add new users – but I don’t handle removing users who no longer have access.

Thanks again to Paul Mather – and these are his blog posts that inspired this post and video.

https://pwmather.wordpress.com/2017/07/07/projectonline-project-user-sync-to-project-sites-ppm-o365-powershell-sharepoint/

https://pwmather.wordpress.com/2017/07/28/running-projectonline-powershell-in-azure-using-azurefunctions-ppm-cloud-flow-logicapp-part1/

https://pwmather.wordpress.com/2017/08/01/running-projectonline-powershell-in-azure-using-azurefunctions-ppm-cloud-flow-logicapp-part2/

Microsoft Planner: A Change Management Solution for Office 365


MessageCenterToPlannerCodeSamplesV2

TL;DR version – Create a plan with buckets for your products, get an AppId for a Native App that can read SHD and read/write all groups – then update the various variables in Application Settings for the Function, or directly in the PowerShell – and create your custom products.json.  If you need more pointers – read on…

*** Update 10/28/2017 - new sample code - corrected hard-coded tenant ID - added as a variable or application setting depending on the script. You can find it by going to the Admin Portal, then the Admin Center for Azure AD, then the Properties item under Manage – and the Directory ID is the GUID you are looking for. ***


*** Update 10/24 - I should have mentioned - works just fine if your plan is one hosted as a tab in Microsoft Teams too!  Find your PlanId using Graph Explorer  ***

I’ve commented before on the challenge that some of our Project and Planner customers have in keeping up with changes in Office 365 – particularly when they are not the Global Admin or service administrator and so are not able to see the Message Center – and when their Global Admin isn’t always seeing the relevance or importance of the messages posted – so here is a potential solution!  How about an automated way that reads the messages and posts them to Planner – automatically placing the ones you are interested in into their own bucket and assigning them to the person who owns that work stream?  Interested?  I thought so.  As a final enticement – here is what we will end up with.

[Screenshot: the finished plan – message center posts sorted into product buckets]

The example I’ve put together is just a sample – no support or warranty, and there are a few different ways you might want to use it or configure it – and as I walk through the configuration I’ll point out some of the ideas I had along the way.  I decided to use Azure Functions – one to read the message center posts (on a timer), which then writes my chosen messages to an Azure storage queue and then a second Azure Function is triggered which reads from the queue and creates the Planner tasks.  It would also be possible to do all this in one PowerShell script from your desktop if you wanted to.  The function and script would need to run with the identity of an admin who can access the Message Center, so if you are reading this and can’t get to Message Center then you’ll need to work with your IT team and/or Global Admin to make this all happen.  Tell them I sent you.

In my example I am interested in posts about Project, Planner, SharePoint, Skype, Yammer and Teams, and I have these associated with the resources in my team.  I am using the title of the message center post and matching on my products – and if two products are mentioned then I put a task in each bucket (See the Skype for Business integration in Yammer task in the above screenshot).  I did wonder about having an ‘other’ bucket too so that nothing got lost that just didn’t happen to mention the product of interest in the title.  I decided to hold these relationships in a json file and hard code the Ids for the related buckets and the resource I was going to assign – just to avoid additional work in the function to pull static information.  Likewise I hold the Id of my Plan in the application settings.  One possible variant on this would be to also hold the Plan Id in the json file if you wanted to spread the different messages across different plans.

In this post I’ll just be getting things set up and pasting in the PowerShell and getting things working – I plan on a follow up to dig deeper into the actual PowerShell and some of the choices I made.

Step 1 – Create a Plan and capture some constants

Nothing special needed here, just a blank plan with members added and buckets created for your desired products.  I’ll create a new one as I walk through – and mine will be called Office 365 Change Management.  I’ve set it to Public and checked the subscribe option so my members will get e-mails.

[Screenshot: creating the Office 365 Change Management plan]

I add my members and buckets.

[Screenshot: the plan with members and buckets added]

and now is a great time to capture my planId – which can be seen at the end of the Url:

….Home/Planner/#/plantaskboard?groupId=7497796a-da37-4d4e-ad2e-5e021de9b7ea&planId=SeQaqSFupUyLdipuqVNVqmQACp9Y

One other thing I need to do for setup is to set the category names.  As this is just a one-off I didn’t code it in the function – it is global to the Plan (but you could add it as a ‘first run’ option I guess).  I just created a test task in my plan and set the categories to match some of the flags used in the Message Center.  Even after deleting this test task the categories remain set for any other tasks added to the Plan.

[Screenshot: category names set on a test task]

I also need the Ids of my buckets and resources – and for these the easiest way is to use the Graph Explorer.  If you log in to your appropriate Office 365 tenant and then make a v1.0 GET call to a Url like this one – https://graph.microsoft.com/v1.0/Planner/Plans/<your plan id>/buckets – that will return a list of your buckets, and the piece you need is the id.  Worth saving off the full response to a file.
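Each bucket in the response looks roughly like this – the id is the piece we want (the ids here are from my tenant):

{
  "@odata.etag": "...",
  "name": "Yammer",
  "planId": "SeQaqSFupUyLdipuqVNVqmQACp9Y",
  "orderHint": "...",
  "id": "udVj17TygU-iF9wlgETsa2QAHIYM"
}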

[Screenshot: Graph Explorer listing the buckets]

To get the members you can get the ‘owner’ field for the Plan (this is actually the Group Id) using

https://graph.microsoft.com/v1.0/Planner/Plans/<Your plan id>/

and then use this in the Group Graph call for members

https://graph.microsoft.com/v1.0/Groups/<the owner id from the plan>/members

Here is my first member – and I’ve highlighted the id for Kari.

[Screenshot: Graph Explorer members response with Kari’s id highlighted]

Step 2 – The products.json file

Now I have my products, buckets and resource ids, I can create the json lookup file that I’ll be using from PowerShell.  The format is as follows – obviously your ids will be different, and feel free to choose different products – this file drives the selection, placement and assignment of messages to tasks.

[
  {
    "product": "Yammer",
    "bucketId": "udVj17TygU-iF9wlgETsa2QAHIYM",
    "assignee": "af9cbf99-1968-4524-9ae1-96d7fc4932f8"
  },
  {
    "product": "Project",
    "bucketId": "H8oHwSPnI0ySTzCNl4z0gWQAJcWf",
    "assignee": "fe5536f0-70ec-468f-8ab1-96179d84fd25"
  },
  {
    "product": "Planner",
    "bucketId": "0-VuIgIqNEeKz9OfFaGkWWQAI0zH",
    "assignee": "cf091cb1-dc23-4e12-8f30-b26085eab810"
  },
  {
    "product": "Teams",
    "bucketId": "A3ul3Cf7ME20jWYiCuHIpWQAHgKO",
    "assignee": "f8ab6bf7-cc84-44b4-944a-2ed564a92709"
  },
  {
    "product": "Skype",
    "bucketId": "Hcds6YIZZk-XIQ0RU6wWqWQAHjHh",
    "assignee": "58a704f4-4aa6-4002-9d1a-4fd0afc7d3bf"
  },
  {
    "product": "SharePoint",
    "bucketId": "lrLVQ7Z1X0G6NWmfAAOzrWQAK9M6",
    "assignee": "cf091cb1-dc23-4e12-8f30-b26085eab810"
  }
]

Step 3 – We need to get an Application Id from Azure AD

An Application Id – sometimes called an AppId or a ClientId – is an identifier from Azure AD that has certain permissions associated with it, and is then used, along with your credentials, to get a token that allows you to do stuff with the various APIs.  In my scenario we will be hitting both the Office 365 Service Communications API and the Graph API.  I’ll be using the same AppId for both and setting the necessary permissions – but you could also use two different AppIds, as the different APIs are used by my two different functions.  To get an Application Id you need to navigate to the Azure AD Admin Center for your Office 365 account (which may not be the same account as your Azure Portal account).  From the Office 365 Admin Portal navigate to the Admin Centers list in the left navigation – Azure AD is usually found at the bottom.

[Screenshot: the Admin Centers list showing Azure AD]

Once you land at aad.portal.azure.com, click on the App registrations option – where we will create a new App registration and get our Id.

[Screenshot: App registrations in the Azure AD admin center]

Click New application registration at the top of the App registrations blade, enter a name for your App (it can be anything), and make sure you select Native – the Redirect URI can be your O365 tenant.  The Create button is at the foot of the blade (move the focus to another field if it isn’t active).

[Screenshot: the New application registration blade]

Next select the App registration you just created and copy the Application ID from the Essentials pane for use later – and click All settings so we can give it some permissions – by selecting Required permissions on the next blade.

[Screenshot: the Application ID in the Essentials pane]

I’ll be setting all permissions required on this single App ID – but if you wanted to split them, the function that writes the plans needs one more permission added to the Windows Azure Active Directory API – Read and write all groups (Sign in and read user profile will already be selected).  You will then also need to click Save, and then Grant Permissions (and Yes to the dialog) in the left blade.  Granting permissions for a Native App is akin to the dialog you will no doubt have seen in some web apps, where you have to acknowledge that the App can do stuff for you before it takes you to the App itself.

[Screenshot: Windows Azure Active Directory API permissions]

For the other permissions required to read the message center posts we will be adding another API – select the Add option, then Select an API, and choose Office 365 Management APIs.  From the list of delegated permissions we only need to select the top one (as I write this anyway) – Read service health information for your organization.  Worth scanning the others just to give you an idea of what other data this API may allow you to read for other applications.  Select, Done and then Grant Permissions will finish this step (and you did copy the Application ID, didn’t you?).  Our permissions page showing the API we just added should look like this:

[Screenshot: Required permissions including the Office 365 Management APIs]

Step 4 – Prepare our Azure Function App

(If you’d rather just run the PowerShell from your desktop – and you have the permissions – then you can skip the Azure stuff and meet me again at Step 6, where the same logic runs as a single script.)

If you already have an Azure subscription then you are good to go – or you can create a free account if you just want to try things out.  You get a $200 credit (or local equivalent), and you also get a free amount of execution time each month (400,000 GB-s) and 1 million executions.  There is also a small cost for the storage account, which also gets used for the queue. https://azure.microsoft.com/en-us/free/

Once you have a subscription, click New in the upper left and in the Compute section you will find Function App.  Once clicked it will prompt you for a name (which needs to be unique across Azure – I usually prefix things I’m working on with my alias).  You also select your subscription and either create a new or use an existing resource group (I’m using a new one – it makes cleaning up easier, in that I can just delete the resource group when I’m finished; if this is a keeper for you then you might want to use an existing one).  I’m going with the consumption plan, the West US location and a new storage account, and finally turning on Application Insights – which gives some useful telemetry on your working system.  Check Pin to dashboard so you can easily find your Function App again, and click Create.

[Screenshot: the Function App creation blade]

It just takes a few minutes to deploy and will show on your dashboard when it is ready.  In subsequent steps we will be adding our actual functions, but first we will configure several application settings – basically some of the variables you need that are easier to pull in to the function rather than hard code wherever you need them.  Once the blade for your new Function App opens you will see a section bottom right labelled Configured features – and we are headed to Application settings to set the settings for our application:

[Screenshot: Configured features – Application settings]

There are a bunch of settings already configured – we will be adding the following:

*** added tenantId 10/28/2017 - You can find it by going to the Admin Portal, then the Admin Center for Azure AD, then the Properties item under Manage – and the Directory ID is the GUID you are looking for. ***

Variable              Value                                      Purpose
aadtenant             <yourtenant>.onmicrosoft.com               The tenant we are working with
aad_username          <yourname>@<yourtenant>.onmicrosoft.com    The login we will be using
aad_password          ***************************                Password for the above
clientId              <your application id from Step 3>          Identifies our app and gets us the permissions
messageCenterPlanId   <the PlanId of the Plan from Step 1>       Makes sure we write to the right plan
tenantId              <see above>                                Identifies your tenant for the api calls

In my case, as I am the only user of my Azure account, I’m ok with a clear text password being saved (and it is only a demo tenant) – there are ways to store and use an encrypted version; searching the web should find some solutions, including Azure Key Vault.  I found this example: https://blog.tyang.org/2016/10/08/securing-passwords-in-azure-functions/.  Once you’ve added all the entries there is a Save option at the top of the page.  Other settings will get added as we create our functions.

[Screenshot: the completed application settings]

Step 5 – Our Azure Functions at last!

To create our first function – the one that reads the messages on a timer (hourly for starters) and writes to a storage queue – we first hit the + sign next to Functions:

[Screenshot: the + sign next to Functions]

There is a new wizard for premade functions – but for the PowerShell one I am creating I’ll go for the Custom function option in the lower part of the screen:

[Screenshot: the Custom function option]

I choose PowerShell in the language dropdown (but take a look at the other options while you are there) and the middle one is the one for us – TimerTrigger – PowerShell.  I’ll call it ReadMessagesOnTimer, set the schedule to hourly with the cron format of 0 0 * * * * and click Create.  (Daily may be fast enough in production – although if you were using this to read service health data rather than messages then hourly may be appropriate.  See https://codehollow.com/2017/02/azure-functions-time-trigger-cron-cheat-sheet/ for a good cron reference for Azure Functions.)
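That schedule ends up in the function’s function.json binding – roughly like this (the binding name shown is the portal default; this is a sketch rather than a copy from my app):

{
  "disabled": false,
  "bindings": [
    {
      "name": "myTimer",
      "type": "timerTrigger",
      "direction": "in",
      "schedule": "0 0 * * * *"
    }
  ]
}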

[Screenshot: the TimerTrigger – PowerShell settings]

The initial script just outputs a timestamp of when the function was executed.  We have some other settings to add before we paste in the script. 

[Screenshot: the default timer function script]

If we go to the View files tab on the right there are a couple of files we need to upload.  One is the products.json file we created earlier, and the second is a dll that we need for the Azure AD (ADAL) authentication.  This is called Microsoft.IdentityModel.Clients.ActiveDirectory.dll and can be found in C:\Program Files (x86)\Microsoft SDKs\Azure\PowerShell\ServiceManagement\Azure\Services by default – if you don’t see it then you probably need the Azure SDK https://azure.microsoft.com/en-us/downloads/.  Upload both of these files.

Next in the Integrate section under our function we will add a new output – for writing to our storage queue.  After we click New output we can choose Azure Queue Storage from the panel and click Select.

[Screenshot: choosing Azure Queue Storage as a new output]

For the Message parameter name and Queue name I’ll leave the defaults, and for the Storage account connection I’ll choose AzureWebJobsStorage from the dropdown – then click Save.  And while I’m here I’ll also create a new function triggered by this output, by clicking Go.
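That output binding is also recorded in function.json – something like this, assuming the default names the portal offers:

{
  "type": "queue",
  "direction": "out",
  "name": "outputQueueItem",
  "queueName": "outqueue",
  "connection": "AzureWebJobsStorage"
}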

[Screenshot: the queue output settings]

Again I will choose PowerShell from the Language dropdown – name my function WriteTaskToPlan and click Create.

[Screenshot: creating the WriteTaskToPlan function]

My second function also needs to have the dll uploaded but doesn’t need the products file as I do all the selection and tagging in the first function and write that out to the queue.

Now for the fun stuff – pasting in the PowerShell code itself and running!  The code is attached in ReadMessageOnTimer.txt and WriteTaskToPlan.txt.  We can start with the first one – paste it in, and there shouldn’t be any need for edits unless you have deviated from my naming – one potential change is in the path to the uploaded documents, which includes the name of the function – so edit as necessary.  You can run this just using the Run option – you will see an exception the first time, as the queue gets created if it doesn’t exist – and it will populate the queue with messages.  If I navigate back to the Integrate section for my WriteTaskToPlan function (which we haven’t pasted in yet, unless you are ahead of me) and change the Queue name to the real one I am using – “message-center-to-planner-tasks” – we should quickly see that our queue gets drained – assuming you have a way to monitor the queue!  This is where Azure Storage Explorer comes in handy – see the foot of the https://azure.microsoft.com/en-us/downloads/ page under Standalone tools.

In this screenshot I managed to see some of the messages before they were picked up by the 2nd function (which isn’t actually creating tasks yet).

[Screenshot: queue messages in Azure Storage Explorer]

The storage account is my brismitho365mcstore and I can see my message-center-to-planner-tasks queue.  We can also monitor function activity via the Monitor section under each of the functions – and this shows the ‘empty’ function pulling messages from the queue:

[Screenshot: the Monitor section showing messages pulled from the queue]

If I now overwrite the current contents (2 lines) of my WriteTaskToPlan function with the contents of WriteTaskToPlan.txt – updating the paths to the files as necessary if the name isn’t the same as mine – then I can save, and I should now have a working system.  I can either wait until the top of the hour – or just run the ReadMessageOnTimer function manually to check that all is working.  Just to show some of the debugging capabilities, I ‘forgot’ to load the dll – so in my case I had a number of failures (it tries each queue message 3 times before giving up) and here you can see the really useful log info showing that my dll wasn’t found (thinking back, I probably could have loaded it at the wwwroot level to serve both functions too):

[Screenshot: function log showing the missing dll]

With my dll uploaded and another manual run of my function we are in business!

[Screenshot: tasks created in the plan]

And to show what the tasks look like when I drill in – let’s take a look at a rich content message (actually a test sent just to my tenant with a Planner title – MC123579 – Planner: test format targeted post; we have a bug currently where the thumbnail for the image isn’t showing) – but first, this is the original message in the Message Center, complete with a product link and a YouTube video:

[Screenshot: the original rich message in the Message Center]

Then in Planner we don’t have the richness, so instead I trawled the text for Urls and added them as attachments – so the person who is assigned the task in Planner can still review the content (and remember, the potential target audience here is people who can’t access the Message Center as they are not global admins).  You can also see here the categories set from metadata in the message.

[Screenshot: the resulting Planner task with Url attachments and categories]

Step 6 – “Azure Functions are not something I want to play with right now”

That turned into a pretty long blog – even if it is mostly pictures – and I really wanted to step through the code too to explain what I was doing – as I’m sure even if you don’t want to do this precise thing there may be parts that are useful – but I’ll save that for another blog.

This last part takes the logic of the two functions and puts them into one PowerShell script that you could run from your desktop (assuming the right permissions, the Office 365 PowerShell connectivity – see https://technet.microsoft.com/en-us/library/dn975125.aspx – and also the Azure SDK stuff, https://azure.microsoft.com/en-us/downloads/; PowerShell is listed half way down in the command-line tools).

You will still need to do Steps 1 to 3 to create your Plan, get your bucket and resource info and create your products.json – and register an ApplicationId – and then the PowerShell is in the attachment – MCtoPlannerFull.txt.  You will need to edit the various constants – all identified by <something that needs editing> type text.  The main differences are references to local files – not going via a queue, and looping round the entire set of messages as it writes them out.

Hopefully the copy/paste of the text hasn’t broken anything – but it is always worth looking for the quotes or dashes – just in case they have been changed by one of the editors I’ve used!

This is just a sample – and there are many ways you could change things about – from pulling the other message types, such as service information, or writing out to other applications like Yammer or Teams – that is the beauty of the Graph APIs – once you get familiar with them they open up a whole world of applications.  And maybe the Message Center will get the PowerApp/Flow treatment – and make this even easier!

References:

The scripts are attached in a zip file – along with a sample products.json – and the scripts also contain this disclaimer:

Sample scripts are not supported under any Microsoft standard support program or service. The sample scripts are provided AS IS without warranty of any kind. Microsoft disclaims all implied warranties including, without limitation, any implied warranties of merchantability or of fitness for a particular purpose. The entire risk arising out of the use or performance of the sample scripts and documentation remains with you. In no event shall Microsoft, its authors, or anyone else involved in the creation, production, or delivery of the scripts be liable for any damages whatsoever (including, without limitation, damages for loss of business profits, business interruption, loss of business information, or other pecuniary loss) arising out of the use of or inability to use the sample scripts or documentation, even if Microsoft has been advised of the possibility of such damages.


Office 365 Message Center to Planner: PowerShell walk-through–Part 1


*** Update 10/28/2017 - made code correction mentioned below - setting and using an environment variable for my tenantId

$uri = "https://manage.office.com/api/v1.0/" + $env:tenantId + "/ServiceComms/Messages"
$messages = Invoke-WebRequest -Uri $uri -Method Get -Headers $headers -UseBasicParsing

***

I mentioned in my previous blog post - https://lwal.me/3n - that I’d walk through the PowerShell – so here it is, at least the first part.  Hopefully this will help answer questions like “What was he thinking of?!” and “Why code it like that?” – and maybe the answer will be that I didn’t know any better – so I’m happy to take comments back on this – but there will sometimes be a valid reason for some strange choices.  I’ll just go through the two Azure Function scripts (the first here, where I read the messages; creating the tasks comes in Part 2) – but the logic is the same in the full PowerShell-only version – just a bigger loop.

#Setup stuff for the O365 Management Communication API Calls

$password = $env:aad_password | ConvertTo-SecureString -AsPlainText -Force

$Credential = New-Object -typename System.Management.Automation.PSCredential -argumentlist $env:aad_username, $password

Import-Module "D:\home\site\wwwroot\ReadMessagesOnTimer\Microsoft.IdentityModel.Clients.ActiveDirectory.dll"

$adal = "D:\home\site\wwwroot\ReadMessagesOnTimer\Microsoft.IdentityModel.Clients.ActiveDirectory.dll"
[System.Reflection.Assembly]::LoadFrom($adal)

$resourceAppIdURI = "https://manage.office.com"

$authority = "https://login.windows.net/$env:aadtenant"

$authContext = New-Object "Microsoft.IdentityModel.Clients.ActiveDirectory.AuthenticationContext" -ArgumentList $authority
$uc = new-object Microsoft.IdentityModel.Clients.ActiveDirectory.UserCredential -ArgumentList $Credential.Username,$Credential.Password

$manageToken = $authContext.AcquireToken($resourceAppIdURI, $env:clientId,$uc)

The first few lines are setting up the Office 365 Management Communication API (Preview) connection.  Worth noting the ‘Preview’ there – this is subject to change and might break at any point, so best keep an eye on it.  Once it is GA I’ll modify these scripts as necessary.  I’m storing the password as a variable in the Application Settings for the Function App hosting my functions – and these are accessed via the $env: prefix.  As I mentioned in the previous blog – I am the only person with access to my Azure subscription, so I’ve stored it in App settings as plain text – but you might want to handle this more securely if you share subscriptions.  I’m then getting a credential object.  The dll for ADAL is also required – so it is uploaded to the Function, and the root directory for the functions is D:\home\site\wwwroot\<FunctionName>.

The endpoint I need to authenticate to and get my token from is https://manage.office.com.  I also need to pass in my authority Url – this is my tenant appended to https://login.windows.net/.  Both Graph and the Manage API require App and User authentication – which is why I need both the user credentials and the Application ID (clientId) – the latter is also stored in my environment variables for the Function App.

#Get the products we are interested in
$products = Get-Content 'D:\home\site\wwwroot\ReadMessagesOnTimer\products.json' | Out-String | ConvertFrom-json

The next part gets my products from the json file – I chose to use a single plan and then push into buckets by product and make assignments by product.  You could easily add a PlanId at each product level here – and write to more than one plan (see the sketch below).  Adding a new product is as easy as creating a new bucket, getting its Id and the Id of the person handling the messages for that product, and extending the json file accordingly.  On the next run it will populate the new bucket – if there are any messages.
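For example, each entry could carry its own planId – a hypothetical extension; the sample as written reads the plan from the messageCenterPlanId application setting instead:

[
  {
    "product": "Yammer",
    "planId": "<plan id for the plan handling Yammer messages>",
    "bucketId": "udVj17TygU-iF9wlgETsa2QAHIYM",
    "assignee": "af9cbf99-1968-4524-9ae1-96d7fc4932f8"
  }
]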

$messages = Invoke-WebRequest -Uri "https://manage.office.com/api/v1.0/d740ddf6-53e6-4eb1-907e-34facc13f08b/ServiceComms/Messages" -Method Get -Headers $headers -UseBasicParsing
$messagesContent = $messages.Content | ConvertFrom-Json
$messageValue = $messagesContent.Value
ForEach($message in $messageValue){
If($message.MessageType -eq 'MessageCenter'){

I really should have taken that GUID and put it in a variable – or at least explained what it is.  It is the tenant identifier for my Office 365 tenant.  You can find it by going to the Admin Portal, then the Admin Center for Azure AD, then the Properties item under Manage – and the Directory ID is the GUID you are looking for.  I’ve since revised the code with a $env: variable for this – see the update at the top of this post.  The json returned is turned into a PowerShell object – an array containing all the messages, both SHD and Message Center.  I get the value from these messages into my messageValue array – then I can loop through all the individual messages; I am only interested in the ones of type ‘MessageCenter’.

ForEach($product in $products){
If($message.Title -match $product.product){
$task = @{}
$task.Add('id', $message.Id)
$task.Add('title',$message.Id + ' - ' + $message.Title)
$task.Add('categories', $message.ActionType + ', ' + $message.Classification + ', ' + $message.Category)
$task.Add('dueDate', $message.ActionRequiredByDate)
$task.Add('updated', $message.LastUpdatedTime)
$fullMessage = ''
ForEach($messagePart in $message.Messages){
$fullMessage += $messagePart.MessageText
}
$task.Add('description', $fullMessage)
$task.Add('reference', $message.ExternalLink)
$task.Add('product', $product.product)
$task.Add('bucketId', $product.bucketId)
$task.Add('assignee', $product.assignee)

The next section loops through my products, matching product names to the titles of the message center posts.  There are other fields returned that look more promising to use, but I found they were not reliable, as they were sometimes blank.  I have started discussions with the team to see if we can fix that from the message generation side.  I also chose to create multiple tasks if there were multiple products in the title.  It does look like the other potential fields I would prefer to use are also arrays – so multiple products should still be possible if I changed to WorkloadDisplayName or AffectedWorkloadDisplayName, or even AppliesTo.

Once I have a match I populate the Id, then the title (with the Id prepended), then make a list of categories from the contents of ActionType, Classification and Category.  This may be another area where we can tighten up on the usage of these fields.  I set a dueDate if there is one and also get the LastUpdatedTime.  I’m not using that yet – relying instead on updated titles for new postings.  Probably an area for improvement – but as there are not a huge number of records I wasn’t too bothered about trimming down the payload too much.

The actual message can be in multiple parts – more often used for the Service Health Dashboard, where we issue updates as an issue progresses – but I thought it made sense to include that in my code too.  I add any ExternalLink items as the reference – then finally add the bucketId and assignee.  Doing that here saves me re-reading products.json in the other function for each task request.

#Using best practice async via queue storage

$storeAuthContext = New-AzureStorageContext -ConnectionString $env:AzureWebJobsStorage

$outQueue = Get-AzureStorageQueue –Name 'message-center-to-planner-tasks' -Context $storeAuthContext
if ($outQueue -eq $null) {
$outQueue = New-AzureStorageQueue –Name 'message-center-to-planner-tasks' -Context $storeAuthContext
}

# Create a new message using a constructor of the CloudQueueMessage class.
$queueMessage = New-Object `
-TypeName Microsoft.WindowsAzure.Storage.Queue.CloudQueueMessage `
-ArgumentList (ConvertTo-Json $task)

# Add a new message to the queue.
$outQueue.CloudQueue.AddMessage($queueMessage)
}
}
}
}

I did initially plan to just call my other function at this point, but reading up on Function best practices it looked like I should use a storage queue – so, finding a good reference – http://johnliu.net/blog/2017/6/azurefunctions-work-fan-out-with-azure-queue-in-powershell – I took that direction.  Pretty simple – I get my storage context and then create my queue if it doesn’t already exist.  Then I can convert my $task object to json and pass it in as the argument, and this adds each of my tasks to the queue – ready to be picked up, each as a message like the sample below.  And I will pick this back up in Part 2!
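For reference, each message on the queue is just the json of the $task hashtable built above – so something along these lines (the values here are invented; the bucketId and assignee are the Planner entries from products.json):

{
  "id": "MC123456",
  "title": "MC123456 - An update is coming to Planner",
  "categories": "Action, Plan for Change, Awareness",
  "dueDate": null,
  "updated": "2017-10-20T17:00:00Z",
  "description": "We are making some changes...",
  "reference": "https://support.office.com/...",
  "product": "Planner",
  "bucketId": "0-VuIgIqNEeKz9OfFaGkWWQAI0zH",
  "assignee": "cf091cb1-dc23-4e12-8f30-b26085eab810"
}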

Office 365 Message Center to Planner: PowerShell walk-through–Part 2


The code I am walking through here is that which drives the sample I blogged about in the posting https://blogs.msdn.microsoft.com/brismith/2017/10/23/microsoft-planner-a-change-management-solution-for-office-365/

In Part 1 I walked through the PowerShell that was reading the messages, filtering out the ones I was interested in by product and adding the required metadata to get them to the right plan, bucket and assignee – and finally writing that to a storage queue in Microsoft Azure.  Be sure to go back if you didn’t see the update regarding the tenantId – which I had incorrectly hard-coded in the original post and have since corrected.

In this part I’ll explain the code that is picking up the messages from the storage queue and creating the tasks.  The first chunk of code is setting things up:

$in = Get-Content $triggerInput -Raw
$messageCenterTask = $in | ConvertFrom-Json
$title = $messageCenterTask.title

# BriSmith@Microsoft.com https://blogs.msdn.microsoft.com/brismith

# Code to read O365 Message Center posts from the message queue and create Planner tasks

#Setup stuff for the Graph API Calls

$password = $env:aad_password | ConvertTo-SecureString -AsPlainText -Force

$Credential = New-Object -typename System.Management.Automation.PSCredential -argumentlist $env:aad_username, $password

Import-Module "D:\home\site\wwwroot\WriteTaskToPlan\Microsoft.IdentityModel.Clients.ActiveDirectory.dll"

$adal = "D:\home\site\wwwroot\WriteTaskToPlan\Microsoft.IdentityModel.Clients.ActiveDirectory.dll"
[System.Reflection.Assembly]::LoadFrom($adal)

$resourceAppIdURI = "https://graph.microsoft.com"

$authority = "https://login.windows.net/$env:aadTenant"

$authContext = New-Object "Microsoft.IdentityModel.Clients.ActiveDirectory.AuthenticationContext" -ArgumentList $authority
$uc = new-object Microsoft.IdentityModel.Clients.ActiveDirectory.UserCredential -ArgumentList $Credential.Username,$Credential.Password

$graphToken = $authContext.AcquireToken($resourceAppIdURI, $env:clientId,$uc)

$messageCenterPlanId= $env:messageCenterPlanId

The first few lines are pulling items from the queue – I’m not doing anything with the title at this point; I was using that when testing the code.  The setup for the Graph calls is very similar to that used in Part 1 for the calls to the Service Management API – just with a different Url, this time https://graph.microsoft.com.  My $graphToken is used for the subsequent calls.  I’m getting the planId from my Function App application settings environment variables – but, as mentioned before, if you wanted to have different products in different plans this could be an extension to products.json and added to the data going to the storage queue.

#################################################
# Get tasks
#################################################

$headers = @{}

$headers.Add('Authorization','Bearer ' + $graphToken.AccessToken)
$headers.Add('Content-Type', "application/json")

$uri = "https://graph.microsoft.com/v1.0/planner/plans/" + $messageCenterPlanId + "/tasks"

$messageCenterPlanTasks = Invoke-WebRequest -Uri $uri -Method Get -Headers $headers -UseBasicParsing

$messageCenterPlanTasksContent = $messageCenterPlanTasks.Content | ConvertFrom-Json
$messageCenterPlanTasksValue = $messageCenterPlanTasksContent.value
$messageCenterPlanTasksValue = $messageCenterPlanTasksValue | Sort-Object bucketId, orderHint

#################################################

# Check if the task already exists by bucketId
#################################################
$taskExists = $FALSE
ForEach($existingTask in $messageCenterPlanTasksValue){
if(($existingTask.title -match $messageCenterTask.id) -and ($existingTask.bucketId -eq $messageCenterTask.bucketId)){
$taskExists = $TRUE
Break
}
}

Next I am getting the existing tasks from the plan – so if you did write different products to different plans this part would need a change.  I’m making a call to the Graph API to get the tasks for my specific planId, reading them into an object, then looping through and comparing each to the message I pulled from the storage queue.  This might be worth some extra work, as all I’m doing is checking whether an existing title contains the id of my new message.  If you remember, my task title is a concatenation of message id + message title.  It isn’t unknown for a message to get updated – and sometimes the title changes – but the id will not, so I’d miss those updates with this code.  There is a date you could also use, and I did consider adding it somewhere I could reference.  It would be easiest to add it into the title – if you used something like the first characters of the description it would require another call to task details.  If there were thousands of messages it might even be worth holding that somewhere in an Azure table for reference – but that seemed overkill when I’m only pulling a couple of dozen messages.  YMMV.
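If you did want to catch those updated titles, one low-tech option is to flag a title change when the id matches – a sketch along these lines (the PATCH to update the existing task is left out):

$taskExists = $FALSE
$taskNeedsUpdate = $FALSE
ForEach($existingTask in $messageCenterPlanTasksValue){
    if(($existingTask.title -match $messageCenterTask.id) -and ($existingTask.bucketId -eq $messageCenterTask.bucketId)){
        $taskExists = $TRUE
        # Same message id but a different title means the message was updated
        if($existingTask.title -ne $messageCenterTask.title){
            $taskNeedsUpdate = $TRUE
        }
        Break
    }
}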

# Adding the task
if(!$taskExists){
$setTask =@{}
If($messageCenterTask.dueDate){
$setTask.Add("dueDateTime", ([DateTime]$messageCenterTask.dueDate))
}
$setTask.Add("orderHint", " !")
$setTask.Add("title", $messageCenterTask.title)
$setTask.Add("planId", $messageCenterPlanId)

# Setting Applied Categories

$appliedCategories = @{}
if($messageCenterTask.categories -match 'Action'){
$appliedCategories.Add("category1",$TRUE)
}
else{$appliedCategories.Add("category1",$FALSE)}
if($messageCenterTask.categories -match 'Plan for Change'){
$appliedCategories.Add("category2",$TRUE)
}
else{$appliedCategories.Add("category2",$FALSE)}
if($messageCenterTask.categories -match 'Prevent or Fix Issues'){
$appliedCategories.Add("category3",$TRUE)
}
else{$appliedCategories.Add("category3",$FALSE)}
if($messageCenterTask.categories -match 'Advisory'){
$appliedCategories.Add("category4",$TRUE)
}
else{$appliedCategories.Add("category4",$FALSE)}
if($messageCenterTask.categories -match 'Awareness'){
$appliedCategories.Add("category5",$TRUE)
}
else{$appliedCategories.Add("category5",$FALSE)}
if($messageCenterTask.categories -match 'Stay Informed'){
$appliedCategories.Add("category6",$TRUE)
}
else{$appliedCategories.Add("category6",$FALSE)}

$setTask.Add("appliedCategories",$appliedCategories)

If the task doesn’t exist then I need to add it.  I’ll take this in a couple of chunks – this first part starts building my $setTask object by taking data from my $messageCenterTask and setting the appropriate properties.  First I set a dueDate if one exists, then add the orderHint to place this at the end, and set the planId and title.

The categories were a tricky one, as there are a number of different fields in the message center that carry status information – so I looked across all of them and decided which were worth exposing.  This is hard-coded based on how you set the categories up in your plan – but you can see from my code how I am turning on the individual categories based on the presence of the terms in my array of values in $messageCenterTask.categories.  So this is the part that turns the coloured tabs on.

# Set bucket and assignee

$setTask.Add("bucketId", $messageCenterTask.bucketId)

$assignmentType = @{}
$assignmentType.Add("@odata.type","#microsoft.graph.plannerAssignment")
$assignmentType.Add("orderHint"," !")
$assignments = @{}
$assignments.Add($messageCenterTask.assignee, $assignmentType)
$setTask.Add("assignments", $assignments)

# Make new task call

$Request = @"

$($setTask | ConvertTo-Json)
"@

$headers = @{}

$headers.Add('Authorization','Bearer ' + $graphToken.AccessToken)
$headers.Add('Content-Type', "application/json")
$headers.Add('Content-length', + $Request.Length)
$headers.Add('Prefer', "return=representation")

$newTask = Invoke-WebRequest -Uri "https://graph.microsoft.com/v1.0/planner/tasks" -Method Post -Body $Request -Headers $headers -UseBasicParsing
$newTaskContent = $newTask.Content | ConvertFrom-Json
$newTaskId = $newTaskContent.id

Continuing my $setTask object, I add in the bucketId and my $messageCenterTask.assignee.  Assignments is a collection keyed by the assignee’s user id – which is why I set up the $assignmentType object first and then add it to the ‘assignments’ hashtable against that user id.

I have all I need for my new task now – so I build up the request by converting my $setTask to json, configure my header, then make the POST call to the Graph API.  Running this in an Azure Function requires the -UseBasicParsing parameter, as the environment is somewhat limited and does not have the full IE engine.

I grab the returned json and pull the task Id out by converting the Content from json to a PowerShell object and getting the .id property.  I’ll need this to be able to add the rest of the task details.

# Add task details
# Pull any urls out of the description to add as attachments
$matches = New-Object System.Collections.ArrayList
$matches.clear()
$regex = 'https?:\/\/(www\.)?[-a-zA-Z0-9@:%._\+~#=]{2,256}\.[a-z]{2,6}\b([-a-zA-Z0-9@:%_\+.~#?&//=]*)'
# Find all matches in description and add to an array
Select-String -InputObject $messageCenterTask.description -Pattern $regex -AllMatches | % { $_.Matches } | % { $matches.Add($_.Value) }

#Replacing some forbidden characters for odata properties

$externalLink = $messageCenterTask.reference -replace '\.', '%2E'
$externalLink = $externalLink -replace ':', '%3A'
$externalLink = $externalLink -replace '\#', '%23'
$externalLink = $externalLink -replace '\@', '%40'

$setTaskDetails = @{}

$setTaskDetails.Add("description", $messageCenterTask.description)
if(($messageCenterTask.reference) -or ($matches.Count -gt 0)){
$reference = @{}
$reference.Add("@odata.type", "#microsoft.graph.plannerExternalReference")
$reference.Add("alias", "Additional Information")
$reference.Add("type", "Other")
$reference.Add('previewPriority', ' !')
$references = @{}
ForEach($match in $matches){
$match = $match -replace '\.', '%2E'
$match = $match -replace ':', '%3A'
$match = $match -replace '\#', '%23'
$match = $match -replace '\@', '%40'
$references.Add($match, $reference)
}
if($messageCenterTask.reference){
$references.Add($externalLink, $reference)
}
$setTaskDetails.Add("references", $references)
$setTaskDetails.Add("previewType", "reference")
}
Start-Sleep 2

Adding the task details is basically adding the description, and any references.  There may be defined references, such as the ‘additional information’ link that I pulled through as a true $messageCenterTask.reference – but I also used this field for another purpose.  Message Center posts can now be very rich – they can include videos and other Urls pointing to documents, the video thumbnail and so on.  As the Planner description cannot display any of this, I chose to add any Urls found in the description text itself as additional references – for ease of linking – so you could easily navigate out to YouTube, for example, to view a pertinent video.  That is what the regex command is doing – finding all occurrences of Urls and adding them to the $matches array.

For both my true reference ($externalLink) and my found Urls ($matches) I need to replace certain characters.  A full ‘encode’ isn’t possible here – it is just . : # and @ that need replacing, as these are disallowed in the odata property names used for references.

To add the references I first check if I have any – then set up the static info for the reference objects, add the matches, and add the externalLink if there is one – and set the previewType to reference (which makes the reference the object to show on the task tile).  I think we have a current bug with some types of references not rendering – so you may not see the image you are expecting quite yet.

The last line – Start-Sleep 2 – was added when I was seeing failures adding the task details, probably due to the task not yet being in a state where it could be edited when I make the call in the next chunk of code.  I’m sure there is a tidier way of handling this – but it worked, and I haven’t revisited it.
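One tidier option would be to retry the details read until the new task is actually readable, rather than sleeping for a fixed time – a sketch only, and not what the sample ships with:

# Sketch: poll for the task details instead of a fixed Start-Sleep
$result = $null
$retries = 0
$uri = "https://graph.microsoft.com/v1.0/planner/tasks/" + $newTaskId + "/details"
while (($result -eq $null) -and ($retries -lt 5)) {
    try {
        $result = Invoke-WebRequest -Uri $uri -Method GET -Headers $headers -UseBasicParsing
    }
    catch {
        Start-Sleep -Seconds 2   # details not readable yet - wait and try again
        $retries++
    }
}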

#Get Current Etag for task details

$uri = "https://graph.microsoft.com/v1.0/planner/tasks/" + $newTaskId + "/details"

$result = Invoke-WebRequest -Uri $uri -Method GET -Headers $headers -UseBasicParsing

$freshEtagTaskContent = $result.Content | ConvertFrom-Json

$Request = @"

$($setTaskDetails | ConvertTo-Json)
"@

$headers = @{}

$headers.Add('Authorization','Bearer ' + $graphToken.AccessToken)
$headers.Add('If-Match', $freshEtagTaskContent.'@odata.etag')
$headers.Add('Content-Type', "application/json")
$headers.Add('Content-length', + $Request.Length)

$uri = "https://graph.microsoft.com/v1.0/planner/tasks/" + $newTaskId + "/details"

$result = Invoke-WebRequest -Uri $uri -Method PATCH -Body $Request -Headers $headers -UseBasicParsing

}
#Write-Output "PowerShell script processed queue message '$title'"

To update the task I need the current Etag for the details entity of my new task – so the GET call to the new task’s /details endpoint ensures I have that ($freshEtagTaskContent.’@odata.etag’) for the If-Match header of my subsequent PATCH call.  Then it is very similar to the /tasks call – I convert my $setTaskDetails object to json as my request body, create my header and make the PATCH call.  I was using the final Write-Output to double-check what I was writing out – and you can see this in the function activity if you are debugging.

In my tests, when running the initial function manually from the Azure portal it only takes about 5 or 6 seconds, but when looking at my hourly runs from the ‘Monitor’ option for the function I’m seeing 1 to 2 minutes – I guess because it needs loading up from cold.  Once this runs it pushes data into the storage queue – and the subsequent function gets triggered for each row (as I write this it finds 10 messages for my products).  Each of those jobs takes just 3 or 4 seconds if the task already exists – and only a few seconds more if it needs to create the task.  Monitoring the storage queue using the Azure Storage Explorer I see the items picked up in 30 seconds or so – but it seems much longer when I’m demonstrating the sample!

 

Project Online: Changes to Granularity of Time phased OData


This was announced a while back, as you needed to be aware of the impact on reporting – and you should have seen the posting in the Office 365 Admin message center.  This is rolling out now, and the message center post has just been updated to say that the rollout should be complete by around the end of the year.  (If you are not seeing the message center posts – then talk to your admin and share my blog post https://lwal.me/3n).  The official article on the time phased data roll-up feature can be found at https://support.office.com/en-us/article/Configure-rollup-of-timephased-reporting-data-in-Project-Online-da8487fe-899e-4510-a264-e2ebc948928c.  I’ve also been beaten to the blogs by Paul Mather (and probably others) so I’ll try not to just repeat the great content that is already out there – but highlight a few of the nuances.

The change has a few different aims – and one is certainly to give you faster reporting against your Project Online data.  This can be achieved if the new granularities of weekly or monthly work for you better than daily – with the added bonus that the publish will be a lot faster too!  We don’t have to break the project data down to a daily level for publishing, then have you pull the daily data only to have you aggregate it back to weekly/monthly again.  It will save space too – both in your tenant but also in any local data warehouse you are using.

The first thing to say is we aren’t changing anything automatically.  We are changing the OData schema so you may need to change your reports to ensure the schema changes do not break anything – even if you stick with ‘Daily’ – see this article for more information on best practices.

image  image

If you create a new PWA however, we do now set the default granularity as ‘Never’ so that you will need to make a conscious choice on the granularity you want – or leave it as ‘Never’ if you do not need time phased data (your publish times will be faster!)

I’ll walk through some scenarios, and I’ve also changed my regional site settings to make it more interesting – with my week starting on a Saturday and the first week of the year being defined as the First full week.

image

I’ll use a dummy project that I’ve created with work, costs, baselines etc. – just so we can see what happens to the data in various scenarios.  This is my plan – with 3 work resources, a material resource, a cost resource and a budget cost resource.

image

With my new default (as I only created this PWA site this morning) of ‘Never’ for my timephased data I find that I have no records in any of my timephased data sets.  This is expected!  I’m showing this in Excel – but you’d see the same just browsing to the OData endpoints under …/PWA/_api/ProjectData.  I see 2 projects as it records the Timesheet administrative dummy plan too – which also accounts for the 7th task – and the 6 baselines are for the 6 tasks in this project.  The TimeSet just records the days from 1984 to 2149, and will always be 60,631 rows.
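If you want to browse these feeds yourself, the endpoints look like the following – the field names in the second example are purely illustrative, so check the columns available in your own feed before building anything on them (the browser will encode the spaces in the filter for you):

https://<tenant>.sharepoint.com/sites/pwa/_api/ProjectData/TaskTimephasedDataSet

https://<tenant>.sharepoint.com/sites/pwa/_api/ProjectData/AssignmentTimephasedDataSet?$select=TaskName,AssignmentWork,TimeByDay&$filter=TimeByDay ge datetime'2017-10-01T00:00:00'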

image

I’ll switch to Daily in my reporting granularity settings and republish and see what I see…  and now I have some time phased data – and 828 rows in each of the feeds.  This is a record for each day that has values – and in this snippet I see some costs, actual work and budget cost for one of my tasks.

image

Next I’ll take a look and see what weekly looks like… and this gives me 120 records for each time phased data set – one record for each task for each week there is something to report.  The records aren’t in exactly the same order – but you can see the weeks that are totaling 120 and 80 hours are those with 16 and 24 hours per day.  The dates the time shows against – such as 9/30/2017 – are Saturdays, as that was the day I had set in my regional site settings as the First day of the week.  I’ll come back to ‘week of the year’ later.

*** Update 11/30 - there is a slight change that will likely arrive before you see the feature - any roll-up such as weekly/monthly/fiscal periods - will record data against the FINAL date of the period rather than the FIRST.  Currently as of 11/30 you will still be seeing 'FIRST' as per my screenshots ***

*** Update 12/5/2017 - back to Plan A - all aggregations will list against the FIRST date in the period - as per my screenshots ***

image

Next I’ll go to the Monthly option – and that is Monthly set in the reporting time phased granularity page, not a Fiscal Period of one month.  Here I have just 32 records in my time phased data sets (and they only match because I have 1 assignment per task and the same durations for task and assignment) – with each data item recorded against the 1st of the month – and we get to see all the rows in one view, with the project summary task at the top.

image

Now stepping on to Fiscal Periods.  Hey – where did my data go!

image

As it mentions in the Reporting configuration page – “Important: Reporting data will only be generated for defined fiscal periods. Also, if you change an existing fiscal period, you will need to republish all projects.” – and as I have no defined periods (see above – my FiscalPeriods feed is empty) I will need to go and create some.  I’ll define 2017 first on its own to stress a point – and I’ll use the 4,5,4 method.

 

image

Now at this point I discovered a bug of sorts – fiscal periods need to cover complete years, and the default for the 4,5,4 pattern leaves 12/31 out on a limb…  My Reporting (Project Publish) jobs went down a hole trying to work out what their calendar was, and after an hour or so they failed.  We are looking in to it, and the fix is easy once you know what caused the problem (I had a few variables in my plan and hadn’t noticed my year finished too soon).  Not sure if we will address this, but now we are making more use of fiscal periods it might make sense to not allow an incomplete year to be defined.  So once you define the 4,5,4, make sure the final period goes to 12/31 – unless you are creating 2018 now too, in which case you could start that on 12/31 – whatever works for you.  The same issue can be seen if you don’t have Fiscal Periods defined for the complete duration of your plan – for example, as my plan starts in October 2017 and finishes in March 2018 and my Fiscal Periods start in January, I need to create both the 2017 and 2018 Fiscal Periods.  I’m getting some confirmations of this, as the reporting page appears to indicate that you will only get data if you have defined the Fiscal Period – which to me meant that I could ‘choose’ not to create my Fiscal Period for 2018 if I was only interested in 2017 at the moment – so my publish would be quicker, I would use less space and reporting would be quicker (though only for 2017).  However, currently projects will fail (after about an hour in my tests) at the Reporting (Project Publish) step unless all the Fiscal Periods you need are created.  I hope to hear that this is a bug.

*** Update 11/30 Confirmed as a bug and we have a fix coming - we will not fail if any missing dates or periods exist.  For example, if you only defined Fiscal Periods for 2017 and 2019 then a plan from 2017 to 2020 would only publish the details for 2017 and 2019.  Not sure why you ever would - but that is a good way to describe the behavior you will see ***

Back to the main story – I have my Fiscal Periods for 2017 and 2018 configured – complete with 12/31 for each year so I publish my project:

image

Now I see 31 rows – 1 fewer than my monthly based report, as the pattern of the fiscal periods doesn’t quite match up to the monthly boundaries.  I can see too that I have records coming from my FiscalPeriods feed – and also see the Fiscal Period UID in my TaskTimephasedDataSet for each row (previously I just saw the NULL GUID – 0000000000…).

Time to take a look at some of the other fields new with these changes – let’s dive into the ‘Projects’ feed.  We can see which time phased granularity the project has from the new field ProjectTimePhased – so this confirms that when this project was last published the granularity was set to ‘By Fiscal Period’.  We cannot tell what the fiscal period configuration is, but looking at the TimeSet feed we could deduce that from the dates and fiscal periods.

image

Towards the end of the field list in the Projects feed we have some more new fields – and these relate to workflow and you can read more about these in the support document at https://lwal.me/4f. In my feed these are unpopulated and I am planning another blog posting where I’ll take a deeper look at these, but you will see that we now expose the WorkflowCreatedDate, WorkflowError, WorkflowResponseCode, WorkflowInstanceId, WorkflowOwnerId and WorkflowOwnerName.

image

With this information we hopefully make it much easier to spot any issues with workflows, and you (the PMO or helpdesk) will be able to proactively see what is happening to workflow across all of your projects, rather than waiting for a project manager to come across a suspended or failed workflow.  As I said – more on that in a future post.

So in summary the new options for reporting allow you to considerably reduce the amount of data you might need to access – by 25 times in my example, assuming that monthly totals work for you – or to around 1/6 of the data if weekly is your thing.  And there is an added benefit not just when reporting on the data, but each time you publish too!  You can change these options at any time, but you would then need to re-publish all of your projects (or at least the ones you want to report on) – so you might also want to look at some kind of automated publish option using PowerShell or JavaScript.  You can search online for various articles on this – one of my favourites would be Paul Mather’s javascript example.
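To illustrate the PowerShell route, here is a rough sketch using the CSOM client assemblies – the DLL path, PWA url and account are placeholders, it assumes the SharePoint Online Client Components SDK is installed locally, and there is no error handling (a project that is already checked out will fail at CheckOut):

# Sketch: publish every project in a PWA instance via CSOM
$sdk = 'C:\Program Files\Common Files\microsoft shared\Web Server Extensions\16\ISAPI'
Add-Type -Path "$sdk\Microsoft.SharePoint.Client.dll"
Add-Type -Path "$sdk\Microsoft.SharePoint.Client.Runtime.dll"
Add-Type -Path "$sdk\Microsoft.ProjectServer.Client.dll"

$ctx = New-Object Microsoft.ProjectServer.Client.ProjectContext('https://contoso.sharepoint.com/sites/pwa')
$securePwd = Read-Host 'Password' -AsSecureString
$ctx.Credentials = New-Object Microsoft.SharePoint.Client.SharePointOnlineCredentials('admin@contoso.onmicrosoft.com', $securePwd)

$ctx.Load($ctx.Projects)
$ctx.ExecuteQuery()
foreach ($proj in $ctx.Projects) {
    $draft = $proj.CheckOut()                 # check out...
    $job = $draft.Publish($true)              # ...publish and check back in
    $ctx.WaitForQueue($job, 120) | Out-Null   # wait up to 120 seconds per project
}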

I’d love to hear back on how you find this works for you and which granularity is best in your scenario.  Or do you think you might change?  Maybe run with Never or Monthly and maybe switch to Daily and republish when you need the deeper granularity?

Project Online: Provisioning a new PWA with PowerShell


This post comes from my “You learn something new every day” collection – after responding to an e-mail saying the only way to provision a PWA site in Project Online is through the UI, I thought I’d better double check.  I’d also seen a Twitter posting about a new update to the SharePoint Online Management Shell that came out yesterday – so what better time to install it (after uninstalling the previous one) and give it a test drive.

Here is the download - https://www.microsoft.com/en-us/download/details.aspx?id=35588

There are no specific Project PowerShell commands there – I quickly established that hadn’t changed (gcm *PWA* and gcm *Project*) but what about New-SPOSite?  Maybe if I use the PWA#0 template?  But how would I enable the PWA features (I had my on-premises hat on…).

So this is what happened…  First step – Connect-SPOService – and the Url is the admin page – not just the tenant page:

image
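The command in that screenshot, with my tenant name, looks like the following – note the -admin suffix on the Url (substitute your own tenant name):

Connect-SPOService -Url https://brismithpjo-admin.sharepoint.com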

Then I used the New-SPOSite command – with the following parameters:

New-SPOSite -Url https://brismithpjo.sharepoint.com/sites/BlogDemoPS -Owner brismith@brismithpjo.onmicrosoft.com -StorageQuota 20 -Title BlogPWAFromPowerShell -Template PWA#0

image

Then went over to my Admin Center to take a look:

image

And once the provisioning was complete – what would I find?

image

A working PWA site!  I’m sure plenty reading this are saying “yes, of course” and already knew this – but it had somehow passed me by.  There isn’t, unfortunately, a way to set the permissions mode via PowerShell – as far as I could see anyway – but I’m happy to be proven wrong.  It would make a great addition to Set-SPOSite.

I did notice that Set-SPOSite does already have an EnablePWA option – but having executed it against a SharePoint team site it didn’t seem to do what I expected (add PWA) – so I’ll try and find out more, as the example on the Set-SPOSite docs page is a little lacking.


Microsoft Planner: New Year–New Features!


This year sees an acceleration in the delivery of new features to Planner and I’m taking the opportunity to walk through some of these and show how one might use them.  The Plan I am using for my examples is one that featured in some of my earlier Planner blogs – and is the Change Management plan for Office 365 – generated and updated daily by an Azure function that reads my Message Center posts and adds them as categorized and labelled tasks – assigned to the person responsible for that workload.  For more information please see Office 365 Message Center to Planner- PowerShell walk-through–Part 2 where I walk through the code.

Groups and Filters

First up are a couple of additions to the ‘Group by’ drop down.  I can now group by due date – to see my late tasks, as well as today’s, this week’s, next week’s and future tasks in vertical columns.

A Planner Hub view where the plans are group by due date and showing columns for today, next week and future

Next is group by labels – the coloured fly-outs that let you categorize the tasks.  In my Change Management plan I pulled in the various message center types so I have columns for Action, Plan for Change, Prevent or Fix Issues and a couple of others.  As you can see there will be some tasks that appear more than once – as they were tagged with multiple labels.

A Planner Hub view where the tasks are grouped by the category labels showing columns for the different categories of messages I am pulling from Message Center

Probably my favourite new feature is the Filter option.  This appears top right, next to the Group by options, and allows you to filter down to specific tasks – there are built-in filters for the same due date options seen in the above view, any of the labels, the assigned resources, or free text.  In this next screenshot I have used a combination of these – to show me any tasks that contain the word “New” (entered in the top of the dialog – it says keyword before you type anything) that are also due in the future or have no date, and are labelled as Advisory or Awareness.  You can see in the header my filter now has a (5) next to it as a reminder that filters are set – and while the dialog is open I also see counters for the selected options (Due (2) and Label (2)).  This works dynamically – filtering as you make new selections.

A filtered view of my Planner tasks using a keyword filter of new as well as some of the built in date and label filters

Schedule View

The Schedule view is really cool and can be found in the Plan header – alongside your Board and Charts tabs.  It isn’t in the screenshots above as the feature hadn’t quite hit production as I was writing this – so this example uses a different plan in one of our testing rings.  Any task with a start and due date will display showing the full duration – in red here for ones that have started and in white for ones yet to commence.  If the task just has a due date then it just displays on that day.  You can also see unscheduled tasks displayed on the right hand side.

My Planner tasks in a schedule view with plans showing on a calendar based on their start and due dates

You can also drag and drop – so here I have dragged my iCal Feature task to 2/2 – and it sets the due date for me.

image

You can also drag the right edge of the task to set a different due date – and have the start date set at the same time.  The schedule view also obeys any filter or group settings – so this next shot shows that I have a filter (I just entered the word ‘Data’) and I’ve changed the grouping to Assigned to.

A schedule view with filter for keyword data and group by assigned to added

As I was finalizing this posting the Schedule feature went live in my tenant – so I decided to include the following schedule view from my O365 Change Management plan – but keep the ones above too, as they show nicely how tasks with both a start and finish date appear.  In the following shot you can see the due dates for several message center items – and you will see that MC124413 shows up twice.  This is somewhat intentional, as my Azure Function for copying the messages into my plan will duplicate into multiple buckets if the title hits multiple bucket terms – in this case it is in my Yammer and Updated buckets.

The schedule view applied to my message center plan showing when the various message center posts need to be actioned

iCal Integration

The iCal integration will be coming along a little bit later – the option will be under the ellipses next to the Schedule option in the plan header.  If you select “Add plan to Outlook calendar” it will pop up a dialog allowing you to choose to publish the plan to an iCalendar feed – and then add it to Outlook.  The plan then gets added to your Outlook as an additional calendar so you can track your plan deliverables.  I’ll add some screenshots once that feature is live in the tenant with my O365 Change Management plan – just for completeness.

The menu from the ellipses where you can choose to add plan to Outlook calendar  The dialog for selecting to publish an iCal version of the Planner tasks

The sharing of calendars will also be controllable at the tenant level via PowerShell configuration settings - see https://go.microsoft.com/fwlink/?linkid=867418

Be sure to visit the full support documentation for Planner at https://go.microsoft.com/fwlink/p/?LinkId=703808 and specific information for the calendaring features at https://go.microsoft.com/fwlink/?linkid=867416

And finally…

I think this one slipped out a while back – without much of a fanfare – but you can now copy tasks in Planner by clicking the ellipses on the task and selecting “Copy task”.  This will then allow you to choose which items you want in the copy, and to edit the task name – then when you click Copy you will have a new task.  It will be in the same bucket from which it was copied – but obviously you can move it wherever you need it in the plan.  Don’t forget, if this isn’t quite what you need, that I posted a blog example of how you might use Graph to clone a whole plan - https://blogs.msdn.microsoft.com/brismith/2017/02/17/microsoft-planner-how-to-clone-a-plan-with-graph/.

Menu where you can choose to copy a task  Dialog where you choose how you want to copy a task and which elements of the task should be copied

Microsoft Planner: When is a plan not a plan?


This posting came about from a customer query where they were not seeing all the plans in their mobile clients that they see on their Planner Hub on the web.  And the answer?  When it is a Group – that doesn’t yet have a Plan.  That is one answer anyway, and there could be some others which I will come to at the end.

Groups are getting to be ubiquitous across Microsoft products – Teams, Power BI, Planner, Outlook, Yammer – and I’m sure there are more – all using Groups and the additional features that Groups brings along.  One of these features is Planner – and when you create a Group you get Planner, no matter where the Group is created from.  As an example let’s take Yammer.  Office 365 Connected Yammer groups are now a thing – and can be used once you set some specific security options in your Yammer Network Settings: first enforcing Office 365 identity, and then you will see that Office 365 Connected Yammer Groups are enabled.

Yammer Network Settings page showing configuration of Office 365 Connected Yammer Groups

When a new Yammer group is created (I’ve called my example “AYammerUnifiedGroup”) a Group gets created too:

New Yammer Group creation

On the Yammer page for AYammerUnifiedGroup you will see links to Office 365 resources of a SharePoint Document Library, SharePoint Site, OneNote and, of course, Planner!

The Yammer page for my new group

If we jump over to our Planner Hub now we can look under More Plans in the left hand navigation – and there we also see AYammerUnifiedGroup.

A view of Planner showing More plans

How about our mobile client?  Hmm – can’t see it here…

Planner on Android

So, why don’t I see it on my Android phone yet?  Lets take a look at the Group.  We can get the Group ID from various places – and I found it by copying the hyperlink for the Planner link on the Yammer page - https://tasks.office.com/d740ddf6-53e6-4eb1-907e-34facc13f08b/Home/Group/fe103374-b2d4-4c57-a9ed-187008ea70b8?auth_pvr=OrgId&auth_upn=BriSmith@BriSMithPJO.onmicrosoft.com – and my Group is fe103374-b2d4-4c57-a9ed-187008ea70b8  (I didn’t click on the link at this time – and you will see why shortly)  I can plug this in to Graph Explorer and see more details about the Group.

Graph Explorer

Then by adding /Planner/Plans I can see all the plans that are ‘owned’ by that Group – and I can see there are….. Zero.

Graph Explorer - Plans
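If you’d rather make the same query from PowerShell than Graph Explorer, a minimal sketch looks like this – assuming $token already holds a valid Graph access token with permission to read Groups:

$headers = @{ 'Authorization' = 'Bearer ' + $token }
$groupId = 'fe103374-b2d4-4c57-a9ed-187008ea70b8'
$plans = Invoke-RestMethod -Uri "https://graph.microsoft.com/v1.0/groups/$groupId/planner/plans" -Headers $headers
$plans.value | Select-Object id, title   # empty until someone opens the plan and it gets created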

What is happening here is that the ‘More Plans’ list is really More Groups – and they will not all have Plans, yet at least.  If I click on my AYammerUnifiedGroup under the More Plans list then it takes me into my Plan – and actually creates it at that moment.  Now I have a plan!

Planner and my AYammerUnifiedGroup plan

I can go back to Graph Explorer and make my query again – and now I see my Plan there too.

Graph Explorer

Refresh my Planner Hub view on my Android App and the Plan is there too.

Planner App on Android

I was thinking of calling this blog post Schrödinger's Plan – as you don’t know if you have a plan or not until you look – but then once you look you will have a plan.

Getting back to other reasons – the product group for Planner are always revisiting the API they use for populating lists of Groups/Plans, as well as pulling the Plans themselves, to try and ensure that you are seeing the plans you need at the right time – so things may change here, and there could be other reasons you do not see what you expect.  A recent example – you can use the Azure Portal to manage Groups, and it is possible to add an Owner for a Group without making them a member.  Planner tends to look for the members – so being just an owner might give you some problems.  I’m sure there could be other cases like this – but the main point of the blog was to help you understand the flow happening when Groups get created and when the actual plan exists – as well as showing how Graph Explorer is a useful tool for looking behind the scenes at Groups and Plans.  If I had clicked the Planner link from Yammer when getting the Group ID from the Url – then that action would have caused my plan to be created too.

No cat was harmed during the writing of this blog post.

Project Online: Reporting and Portfolio Analysis


The new reporting options that are rolling out to Project Online customers allow you to choose how you want your time phased data and give you control of the time buckets you use – and you can choose not to have any at all.  My favourite is to use Fiscal Periods as that also limits the timescale of data that gets published – so reducing the data footprint and publish and reporting times by making sure you have just the data you need.  For example I could choose just to have fiscal periods defined for 2018 and 2019 and wouldn’t need to worry about my 15 year long mega-project – as the data after 2019 wouldn’t be clogging up my system.  But the blog title mentions Portfolio Analysis?  Where does that fit in?  I’ll explain…

When you are creating a new Analysis you can define the time period and granularity – but the granularity is limited to Calendar Months or Calendar Quarters:

Planning horizon options for portfolio analysis are just calendar months or calendar quarters

And as I have checked the option to analyze project resource requirements against capacity the next page in the setup of the analysis will check if I have resource requirements – and give me a red banner if it finds none.

Portfolio showing no resource requirements present

Often that message will just mean you haven’t published the projects yet – but I know mine are all up to date.  What happened?  Let’s take a look at my Reporting options in Server Settings:

Project Online reporting options showing never as the chosen option

Yes, that will do it!  Having Never set for my reporting options means I am feeding no time phased data through to reporting – and the Portfolio Analysis actually uses that data for its resource requirements.  Which option should I choose?  Any of the others will ‘work’ in that they will show you have requirements – but some will give you the results you want, others less so.  I’ll start with a bad choice so that you can understand how this all fits together.  I’ll go for Fiscal Periods – and I have defined my Fiscal Periods as 4,4,5 (each quarter split into 4, 4 and then 5 weeks).  I was going to make a case for 2,3,5 being a better formation – but I’m guessing not too many of my US readers would get the reference.

Fiscal period definition for 2018 showing a 4,4,5 pattern of weeks including starting dates of each period

Now I need to publish all of my projects that are going into the analysis and build my Portfolio Analysis again.  I’ve already built an analysis on the same data set in my Project Server 2013 system, and I’d expect my requirement details to look exactly like this – as I have exactly the same data.  Here is my Project Server 2013 analysis, and I have enough resources to do 3 of my 5 plans.  If you are not familiar with the tool – my availability for the different roles is shown in the top grid, then my requirements are listed in the bottom grid, along with the priority of the plans based on my pair-wise comparison to other projects, weighted by the driver priority.

Requirement details view showing 3 projects selected and 2 not selected based on available resources

How does this compare to my Project Online analysis – where I have my reporting using my 4,4,5 Fiscal Periods?  A quick aside – in Project Online we have a bug where we show you the Enterprise Global (EGlobal) as one of the projects you can select for the analysis – please don’t, as it will break your analysis.  I have all the same projects and drivers, the same resources, and used Calendar Months from March 2018 to March 2019 – so it should be the same as the analysis above, shouldn’t it?  Well, if it was I wouldn’t be blogging…

A very different picture…

Resource requirements showing only 1 out of 5 plans selected based on false requirements from the fiscal period settings used

My analysis says I can only resource a single project – and it looks like I have no requirements at all for resources in March, June and September – but April and July look crazy!  January to March 2019 are also showing no requirements, and I’ll deal with that first.  My Fiscal Periods were only defined for 2018 – so I publish no timephased data for 2019 – that makes sense.

Fiscal periods statement that only time phased data for defined periods will be generated

But March, June and September?  If I look at the first rows from my time phased data from the OData feed for AssignmentTimephasedDataset this gives me a clue – I see data for 2/26, 4/2, 4/30… and these are the start dates of my Fiscal Periods.

OData feed from Project Online showing no period start dates in March

Looking at my Fiscal Period definitions again I can confirm that none of the start dates are in March, June or September.  That explains why I see no requirements.  But April, July and October (and January) each contain two of the start dates – so in effect choosing the 4,4,5 pattern has doubled up my requirements in some months and isn’t registering at all in others.  This is the extreme example – but you will get a similar, though less pronounced, effect if you choose weekly, because some calendar months will contain more ‘week start dates’ than others.  The new reporting options will always show the summed time phased data for the period against the start date of that period.  The effect this has on Portfolio Analysis is one thing you need to take into account when making your choice.  So what would work?  Daily certainly would – but you wouldn’t gain any benefit in terms of faster publishing and reporting.  Monthly would also work just fine – as the start date of the period is the start date of the month – and this gives good publish improvements over daily.  For my money the Fiscal Period option is the best – but with Standard calendar year as the model.

Fiscal period definition with standard calendar year chosen

With this new Fiscal Period defined and reporting configured to use Fiscal Periods I’ll re-publish my plans and build my analysis again.

A matching portfolio analysis to the first one shown - except no data for 2019

Perfect!  Matches up to my 2013 one – except for the 2019 data which I excluded on purpose by not setting the 2019 fiscal period yet.  I hope you like the new reporting options and if you are also using Portfolio Analysis I hope this helps you make the right choice!

Project Online: Why might my site go read-only?


We are currently rolling out a change that could set some PWA sites to read-only – so I thought it worth mentioning 3 reasons why you might see a PWA site go into a read-only mode – which shows as a yellow banner on the PWA pages.  I’ll start with the most recent change.

We have seen a number of customers open cases where their Project Online PWA site appeared to just go away.  Closer examination showed that the customer did not have either a Project Online Professional or Premium license – just some Project Online Essentials licenses.  I blogged about this combination a while ago.  Normally when a license expires the site will go into a read-only state 30 days later – then after a further 90 days the site will be de-provisioned.  The challenge here was that there were still valid licenses (the Project Essentials ones) and the sites were not showing as read-only – but would still finally get de-provisioned.  The tenant and billing administrators would receive e-mails warning of the pending de-provision, but it appears these were ignored in some cases.  The change means that any sites where the only valid license is Project Essentials will now go to read-only after 30 days (or immediately if the Project Online Professional or Premium licenses expired more than 30 days ago).  Many of the customers who have had this issue appear to be from the Education sector – and it seems to coincide with the change in offering where earlier the EDU customers had a free option for Project Online, but now have a reduced price offering – and some renewed with the cheapest license available and did not choose any Professional or Premium licenses.  I should remind everyone that not only do you require at least one Project Online Professional or Premium license to keep your PWA alive – but you also need these licenses to perform administrative and other regular tasks that many of your users might need to do – so please ensure you are correctly licensed.  See https://products.office.com/en-us/project/compare-microsoft-project-management-software?tab=1 for details of the different licenses and their features.

The 2nd reason you might go into a read-only state is if you are over your assigned quota.  I’ll be following up with another blog that goes into more details here and offers some ways to ensure you are not using more data than you need to – and to avoid the risk of hitting your quota – and I’ll add a link here when that posting is live.

The 3rd is usually a very temporary condition that can occur during maintenance.  Most maintenance is not even noticed by our customers and you will not often see this – but if for example we are failing over a SQL Server or migrating a tenant for load balancing our server farms there can be times when short periods of read-only are experienced.

And the 4th?  Just in case you came here from my click-bait headline?  If the planet runs out of 0’s and 1’s then I guess PWA would go read-only.  I said you wouldn't believe it!

Microsoft Planner: Where did my New plan option go?


This one should only affect administrators, but the behavior will also help explain why other users may not see the New plan option.  So what does this look like?  As Planner first loads, the New plan option can be seen – just above the Planner Hub option:

Planner screen while New plan is still visible

But as the page finishes loading, a few calls will have been made to check tenant settings and group memberships – and the UI will be trimmed and the New plan option removed:

Planner screen fully rendered and New plan trimmed

This was a recent change due to the work we are doing in Planner to support Guest access – we are trimming the UI so that people who are not allowed to create Groups do not see the New plan option in Planner.  The article at https://support.office.com/en-us/article/manage-who-can-create-office-365-groups-4c46c8cb-17d0-44b5-9776-005fced8e618 explains how to control Group creation and whether Guests can create groups too.  To control other users you set a tenant-level setting – EnableGroupCreation = False – and then create a security group populated with all the users who are allowed to create Groups.  The GUID for this group is set against the property GroupCreationAllowedGroupId.
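For reference, that configuration can be scripted – a rough sketch using the AzureADPreview module, where ‘GroupCreators’ is a hypothetical security group of the allowed users, and the Group.Unified settings object is assumed to already exist in the tenant (the support article above covers creating it from the template if not):

# Sketch: restrict Group creation to the members of one security group
Connect-AzureAD
$settings = Get-AzureADDirectorySetting | Where-Object { $_.DisplayName -eq 'Group.Unified' }
$settings['EnableGroupCreation'] = 'False'
$settings['GroupCreationAllowedGroupId'] = (Get-AzureADGroup -SearchString 'GroupCreators').ObjectId
Set-AzureADDirectorySetting -Id $settings.Id -DirectorySetting $settings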

Where this comes unstuck is if you have configured this option but have not included the admins in the group allowed to create Groups.  Admins bypass this control and can always create Groups – but our checks don’t find them in the group and trim the UI.  In Planner there is no way to determine if the current user is an admin.

A couple of workarounds – firstly, if you are quick then you can click on New plan before it goes away and you will be able to create a plan (admins only – if the UI is being trimmed because you can’t create Groups then you will still not be able to create them).  The better solution is to add all of your admins to the group you are using to control group creation – then they will not get the trimmed UI.

For the inquisitive amongst you take a look at the F12 Developer tools in your browser of choice and you can see the calls that are getting the data to make the trimming decision.

GetCurrentTenantSettings returns the settings showing how EnableGroupCreation is set (False in this case) and also the group used to control who can create Groups (0ff9c27d-47f3-4d19-b39a-695c8e8ae9d1):

{"AllowGuestsToAccessGroups":true,"AllowGuestsToBeGroupOwner":false,"AllowToAddGuests":true,"ClassificationDescriptions":[],"ClassificationList":["Low","Medium","High"],"CustomBlockedWordsList":[],"DefaultClassification":"","EnableGroupCreation":false,"EnableMSStandardBlockedWords":false,"GroupCreationAllowedGroupId":"0ff9c27d-47f3-4d19-b39a-695c8e8ae9d1","GuestUsageGuidelinesUrl":"","PrefixSuffixNamingRequirement":"","UsageGuidelinesUrl":”Http://aka.ms/o365g”}

Once we have this GroupId we can check whether I am a member using CheckCurrentUserToGroupMemberships – passing in the GroupId.  This returns an array of those Groups passed in of which I am a member (this time there was only one group – but the same call can be used to test many groups):

"{\"@odata.context\":\"https://graph.microsoft.com/beta/$metadata#Collection(Edm.String)\",\"value\":[]}"

The empty array ([]) confirms that I am not a member so the UI gets trimmed!
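Incidentally, you can reproduce that membership check yourself with the public Graph checkMemberGroups action – a sketch, assuming $headers carries a valid Graph access token:

$body = '{ "groupIds": ["0ff9c27d-47f3-4d19-b39a-695c8e8ae9d1"] }'
# value comes back holding the subset of the passed-in groups the user is a member of
Invoke-RestMethod -Uri 'https://graph.microsoft.com/v1.0/me/checkMemberGroups' -Method Post -Body $body -ContentType 'application/json' -Headers $headers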

Microsoft Planner and Guest Access: What you need to know


The new guest access feature for Microsoft Planner is now enabled for all farms - hopefully you saw the Message Center post!

Message Center post as a Task in Planner

This Planner task version of the message center post is courtesy of my Office 365 Change Management example using Azure Functions.  There was also a Tech Community blog post by Jo Parkhurst – https://techcommunity.microsoft.com/t5/Planner-Blog/Bring-your-plans-to-life-with-guests-in-Planner/ba-p/190704.  In this blog post I’ll walk through some of the aspects of using Guests in Planner.

As you can see from the screen capture above I have added a guest user to my plan – the LU icon – lunchwithalens@gmail.com – which also happens to be me – so not a great example of collaboration, but you get the idea.  We don’t yet have the ability to add a guest directly in Planner (coming soon) – but we can assign tasks to guests who are already in the tenant, having been added through Teams for example (ignore the ‘2’ – I forgot to capture a screenshot when I first added myself as a guest):

Screen showing adding guest member to Teams

The important thing to remember here is that we are not federating a login with Google – it actually creates an account in your Office 365 tenant which needs its own password – not your Google (or whatever) password – and please don’t re-use your Google password – very bad practice.  The person you invite will see an e-mail (in this case in Gmail) first verifying the e-mail address – and once verified, they will receive an invite to Teams (in this case):

E-mail sent to guests for verification

Welcome e-mail for guests from Teams

Then your guest has access to Teams and can start collaborating:

Conversation with a guest in Teams

Once your guest has access they can be used elsewhere too – such as being assigned to tasks in Planner:

Assigning a guest in Planner

One thing that may catch your guest out is the log-in Url they need to use – and for Planner just going to https://tasks.office.com will NOT work as that Url does not know which tenant you are attempting to log in to – and only accepts “work or school accounts”.

Invalid log in at tasks.office.com

The log-in Url needs to identify the tenant – so the full Url would look something like this:

https://tasks.office.com/brismithpjo.onmicrosoft.com

where brismithpjo is my Office 365 tenant.

Valid login by adding tenant after tasks.office.com

And that one will work – and the guest sees the plans and groups of which they are members.

Guests view in Planner with all their Plans and Groups

There are some limitations on what guests can do in Planner – some based around security (they can’t create plans or add other guests, for example – and can’t join plans themselves) and others more related to their limited tenant capabilities (no mailbox – so they cannot comment on a task).  Full details can be found in the support article at https://support.office.com/en-us/article/guest-access-in-microsoft-planner-cc5d7f96-dced-4da4-ab62-08c72d9759c6.

Guest accounts will be listed in the Office Admin center under Users, Guests – as well as in Azure Active Directory:

Azure Active Directory view showing my guest user

Project Online: Setting a Status Manager with CSOM


Just a quick post today to share something that might help in scenarios where you need to change the status manager for a task.  As you may well know, it isn’t possible to just select anyone to be a status manager for a task – even in the client application you don’t see people in the Status Manager drop down unless they are already a status manager in the plan – and they only get there by having opened the plan and made assignments (and hence become the status manager for the new task/assignment).  Hopefully there will be a broader change coming – but a recent change in Project Online does bring this a step closer.

You still (as of 5/25) can’t set just anyone programmatically as a status manager using CSOM – they have to be ‘on the list’ – so will need to have opened the plan as mentioned above – but you can set anyone to be an owner (permissions allowing, of course).  And guess what?  That gets them on the list!  So if you set the person you want as status manager to be the owner of the plan – you can then set them as a status manager!  You’d probably then want to set the original owner back again.

In code here is a snippet that shows how you might approach this (I don’t set the owner back here – so if you wanted to do that you’d need to persist the actual owner).  This sample could be added as a class in the Github sample at https://github.com/OfficeDev/Project-Samples/tree/master/Create-Update-Project-Samples.  For brevity I’ve stripped out some error checking.

public static void ReadAndUpdateStatusManager()
{
    // Load csom context
    context = GetContext(pwaInstanceUrl);

    // Retrieve the published project named "New Project"
    // if you know the Guid of the project, you can just call context.Projects.GetByGuid()
    csom.PublishedProject project = GetProjectByName(projectName, context);
    csom.DraftProject draft = project.CheckOut();

    draft.Context.Load(draft.Owner);
    draft.Context.ExecuteQuery();
    draft.Context.Load(draft.Tasks);
    draft.Context.ExecuteQuery();

    foreach (var task in draft.Tasks)
    {
        draft.Context.Load(task.StatusManager);
    }

    draft.Context.ExecuteQuery();

    // Get the user we want to be our status manager
    User statusManager = context.Web.SiteUsers.GetByLoginName("i:0#.f|membership|<username goes here>@<and tenant here>.onmicrosoft.com");
    context.Load(statusManager);
    context.ExecuteQuery();

    // And set our user as the Owner - this gets them 'on the list'
    draft.Owner = statusManager;

    foreach (var task in draft.Tasks)
    {
        try
        {
            if (!task.IsSummary)
            {
                // Then we can set our new Owner as a Status Manager
                task.StatusManager = draft.Owner;
            }
        }
        catch (Exception ex)
        {
            Console.Write(string.Format("Error setting Status Manager: {0}", ex.Message));
        }
    }

    // Publish and check in the project
    csom.JobState jobState = context.WaitForQueue(draft.Publish(true), DEFAULTTIMEOUTSECONDS);
    JobStateLog(jobState, "Updating project");
}


Office 365: New Reader role for the Message Center


Regular readers will know that I feel we make the message center too hard for our customers to reach – and the people who can get there (admins) may not fully understand the impact of some of the issues, and do not pass them on to the people who need to know.  This was behind my example of using Azure Functions to read from the message center and write out to Planner tasks (covered recently too at the Project Virtual Conference) – so I was particularly happy to see (ironically, on the message center) that a new role is rolling out (pun intended) that will allow Office 365 users to be made Message Center Readers.  Very cool!

MC140981 Message - new reader role for Office 365 Message Center

This is rolling out and the expectation is that it should be complete later this month (June 2018).  Please speak with your Office 365 admin and make sure someone in your PMO is made a reader and can see the messages pertinent for Project, Planner, SharePoint or any other workloads you are interested in.

Project, Project Online and Planner Accessibility


The Project and Planner engineering teams have been hard at work improving the accessibility of our products.  A set of documents is now live that walks through some of the screen reading functionality added to help users relying on Narrator or JAWS to navigate Project and Planner – as well as other accessibility features.  The landing pages for Project Web App in Project Online, the desktop client and Microsoft Planner can be found at:

Accessibility support for Project web app

Accessibility support for Project desktop

Accessibility support for Microsoft Planner

The screen reader can tell you where you are on the screen – for example, if you are in the schedule web part it tells you which column and row you are on, and whether the field is editable or read-only.  Similarly, in the desktop client it identifies row and column and the edit options – such as knowing that the duration column has a spinner control.

Keyboard shortcuts are listed both for the web and desktop clients – and it is worth remembering Alt+Q, which takes you directly to the Tell Me dialog where you can search for commands and execute them right from the dialog.  As an example, here I am getting Tell Me to give me the Scroll to Task option rather than hunting it down:

Screen shot showing Scroll to Task in the Tell Me dialog - surfaced via alt-q

For Planner the various screen elements are described which will help the Narrator user to understand the layout – and details of the keyboard controls are given too.

All the articles also include links to both the Microsoft Disability Answer Desk and, for our government, commercial and enterprise customers, the enterprise Disability Answer Desk – where links to all of Microsoft’s accessibility resources can be found.

A Flow Custom Connector to read O365 Communications API


A while back I wrote a blog post walking through how one could use Azure Functions, the Office 365 Communications API and Microsoft Planner to help with change management in Office 365.  The post was pretty popular – over 10k visits – which is good for my blog – but the consensus was that it was too hard to implement.  Not everyone wants to get deep and dirty with Azure Functions (even if I did just use the PowerShell variety!).  The right answer for this problem should always have been Flow, and I’m still surprised that we haven’t got an out-of-the-box connector.  I (finally) found a little time – so had a go at creating a custom connector for Flow that would read all of the Messages, both Message Center and Incident – and then you can do with them what you want!

Stepping back – the following shows the result of my previous endeavors.  I won’t quite be getting that far with this blog post – and I’ll point out the areas that would still need to be addressed.

Screen shot of a Planner plan with O365 Message Center content

The first step is to create our connector – and as this is going to read the Message Center – you need to be an admin – and you will also need to create an App Registration for the connector – so that needs you to be an admin too (or to know one who will work with you to make this happen!).

App Registration

In the Azure Active Directory admin center choose App registrations (or the Preview, which I will be using) and click New registration.  You can call it what you like – mine is O365MC – I just want this for my Contoso users and it is a Web app.  Leave the redirect blank and click Register.

AAD Register an application page - with name set, restricted to Contoso and a Web app

The next stage is permissions – so I click View API permissions (I’m hiding my Application (client) ID and Directory (tenant) ID – you will need to use your own).

Screen shot of the app registration overview showing View API permissions

Next click on Add a permission – then select Office 365 Management APIs

Choosing to add permissions Choosing the right type of API

Choose Delegated permissions on the next pane – I’m only interested in Service Health (which includes Message Center) so I selected just that one option and clicked Add permissions.

Selecting permissions

The final permission step is to Grant admin consent – so it should look like this:

Adding the right permissions for ServiceHealth read

Next we need a key for our application so that our connector can prove who it is.  Certificates & secrets can be found in the left navigation with your app registration selected.  Clicking New client secret creates the key – I chose 2 years for the expiry.  Copy the value from here, as it won’t give you a second chance.

Adding a client secret

We should be good to go now – so we can proceed with our connector.
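
Before building the connector, you can sanity-check the app registration with a minimal PowerShell sketch like the one below.  Be aware this is just an illustration: it uses the client-credentials flow, so it assumes you have also granted the ServiceHealth.Read application permission (the connector itself uses the delegated permission we set above), and $tenant, $clientId and $clientSecret are placeholders for your own values.

    # Minimal sketch – assumes the app registration above plus the
    # ServiceHealth.Read *application* permission (needed for client credentials).
    $tenant       = "contoso.com"
    $clientId     = "<Application (client) ID>"
    $clientSecret = "<client secret>"

    # Request a token for the Office 365 Management API
    $body = @{
        grant_type    = "client_credentials"
        client_id     = $clientId
        client_secret = $clientSecret
        resource      = "https://manage.office.com"
    }
    $token = Invoke-RestMethod -Method Post -Body $body `
        -Uri "https://login.microsoftonline.com/$tenant/oauth2/token"

    # Read the Service Communications messages
    $messages = Invoke-RestMethod -Method Get `
        -Uri "https://manage.office.com/api/v1.0/$tenant/ServiceComms/Messages" `
        -Headers @{ Authorization = "Bearer $($token.access_token)" }

    $messages.value | Select-Object Id, Title, LastUpdatedTime

If that returns your Message Center items then the registration and secret are good – and any problems you hit later will be in the connector definition rather than the app.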

Custom Connector

Go to https://flow.microsoft.com and then to Data in the left navigation and Custom Connectors.  Click Create custom connector to get started.  I’m creating from blank – if you know what Postman and OpenAPI are then feel free to take another approach.  I used Postman, and there are some good explanation articles around, but none specific to these APIs.  Give your connector a name – mine is O365 Message Center Connector – and then click Continue.  The first page of four then appears – General information.

Screen shot of the General information page

The important pieces here are the Host - manage.office.com and the Base URL - /api/v1.0/contoso.com/ServiceComms/Messages.  Feel free to add an icon, colour and description – then click forward to Security, and select OAuth 2.0 from the initial option – then Azure Active Directory from the drop down at the top of the OAuth 2.0 form.

Screen shot of the Security page with OAuth 2.0 selected

I’ve filled in the Resource URL – https://manage.office.com, and the other fields can be set with the data from your App registration.

Client ID is the Application (client) ID, and Client secret is the key you generated.  Leave the rest as is, and advance to Definition.  For now we will just enter the General information and the Request – by selecting Import from sample, then selecting GET and entering a URL.

Screen shot of the Definition page with Import from sample

The full URL is of the form https://manage.office.com/api/v1.0/<your domain here>/ServiceComms/Messages – where <your domain here> is either your vanity domain (contoso.com in my case) or <your tenant name>.onmicrosoft.com.

The response is most easily defined by using the json of a real returned request as a sample – and if you scroll to the end of this post I have added a sample json you could use.  Just click Add default response, paste in the json and click Import.

Importing response json

To see what this did, you can click the ‘default’ button and it will show what it stripped out of the response.  This was a Message Center response, but it could also be used for Incidents.  There are things you can do on this page like hiding some of these fields – or making them ‘important’ (they show up first) – but just for playing around you can leave it at that point.

Response selections

We are nearly there – but we also need to enter the Reply URL back in our App registration.  This was added as the Redirect URL on the 2. Security page once we saved that page – click the page icon to the right of the URL to copy it.

Getting redirect URL for App reply URL

Once you have it, go back to your App registration, add it as a Redirect URL – then save.

Redirect URL setting

OK, now for the exciting part – Create and Test!  Back in your custom connector, click Create connector up at the top right and then click Test – the 4th stage.  We first need to add a connection – so click + New connection.

Adding Connection

This will prompt you to log in – then you have a new active option, Test operation – so go ahead and click it and see what happens.  Hopefully OK (200)!

Response showing things worked

So that is working!  Next we can try it in a Flow, and I’m keeping it simple with a schedule to start things off, running once a day.  The schedule then triggers my custom connector (find it under Custom).  I then have an Apply to each – added automatically, as Flow detects there will be multiple ‘value’ items.  To keep things simple I’m just interested in Project Online items – so I filter Title for ‘Project’ – then I create a new task in an existing plan, add the ID to the title, and update the Task Details with the actual message.  I added the dates for good measure.  I wasn’t intending this as a Flow tutorial – but hopefully you can follow along from this screenshot.

Screenshot of Flow

After this Flow runs I have two new Tasks – and Task details too!

Plan with MessagesTask details with message center message

To get the same features as my Azure Functions example you’d need a much more complex Flow – updating tasks that already exist, copying in the External Links too (but making sure they are not NULL – @empty is your friend), as well as adding different assignees based on the task title and putting tasks in a different bucket.  All perfectly possible – but beyond the scope of this posting, which was really about getting connected to the Message Center.  Enjoy!
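
Just to illustrate the kind of checks involved, here is a hypothetical PowerShell analogue of that extra logic, reusing the $messages result from the earlier sketch – the IsNullOrEmpty test plays the same role as Flow’s @empty:

    # Hypothetical analogue of the extended Flow logic – filter for Project
    # items and guard against empty ExternalLink values (Flow's @empty check).
    $messages.value |
        Where-Object { $_.Title -like "*Project*" } |
        ForEach-Object {
            $link = if ([string]::IsNullOrEmpty($_.ExternalLink)) { "(no link)" } else { $_.ExternalLink }
            "{0}: {1} [{2}]" -f $_.Id, $_.Title, $link
        }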

And this is the json to copy in for the response sample.

{
  "@odata.context": "https://office365servicecomms-prod.cloudapp.net/api/v1.0/d740ddf6-53e6-4eb1-907e-34facc13f08b/$metadata#Messages",
  "value": [
    {
      "@odata.type": "#Microsoft.Office365ServiceComms.ExposedContracts.MessageCenterMessage",
      "AffectedWorkloadDisplayNames": [
        "SharePoint Online"
      ],
      "AffectedWorkloadNames": [
        "SharePoint"
      ],
      "Status": "",
      "Workload": null,
      "WorkloadDisplayName": null,
      "ActionType": "Action",
      "AffectedTenantCount": 0,
      "AffectedUserCount": null,
      "Classification": "Advisory",
      "EndTime": "2018-10-30T23:59:00Z",
      "Feature": null,
      "FeatureDisplayName": null,
      "Id": "MC120728",
      "ImpactDescription": null,
      "LastUpdatedTime": "2017-09-25T16:15:21Z",
      "MessageType": "MessageCenter",
      "Messages": [
        {
          "MessageText": "Beginning September 30, 2018, Visio Web Access (Visio Service) and its Web Part for SharePoint Online will no longer be available. Instead of Visio Web Access you will be able to use Visio Online and to migrate your organization’s web parts to a newer experience with the new Javascript (JS) APIs for Visio Online. Visio Online enables high fidelity viewing, sharing, and collaboration in your favorite browser, without installing the client for all Office 365 licenses. It supports embedding Visio diagrams in SharePoint Online using a modern file viewer web part and with IFrame along with JS API programmability.  \n \n[How does this affect me?]  \nBeginning September 30, 2018, users in your organization will only be able to view Visio diagrams in their browser, but not create or edit. Visio Online viewing is available to most Office 365 subscriptions. \n\n[What do I need to do to prepare for this change?]  \nInstead of using Visio Web Access (Visio Service) and its Web Part for SharePoint Online, we recommend using either: \n- Visio Online and the Document Web Part [Document Web part instead of Visio Web part] \n- iFrame in SharePoint Online for Visio Web Part [iFrame with Visio Online APIs instead of Visio Web Part with Visio Services JS APIs] \n \nPlease click Additional Information to learn more.",
          "PublishedTime": "2017-09-25T16:15:21Z"
        }
      ],
      "PostIncidentDocumentUrl": null,
      "Severity": "High",
      "StartTime": "2017-09-25T16:15:21Z",
      "Title": "We're removing Visio Web Access from SharePoint Online",
      "ActionRequiredByDate": "2018-09-30T23:59:00Z",
      "AnnouncementId": 0,
      "Category": "Plan For Change",
      "ExternalLink": "https://go.microsoft.com/fwlink/?linkid=858892",
      "IsDismissed": false,
      "IsRead": false,
      "IsMajorChange": false,
      "PreviewDuration": 30,
      "AppliesTo": "",
      "MilestoneDate": "2017-09-25T16:06:43Z",
      "Milestone": "",
      "BlogLink": "",
      "HelpLink": "",
      "FlightName": "",
      "FeatureName": ""
    }
  ]
}

Project Online: Getting Started with Roadmap


As we start deploying the various pieces of infrastructure needed to support Roadmap you may see some different things in your Office 365 tenant – so here is a quick explanation of the different parts and what they mean – and the answer to the most asked question: how to turn it on!

TL;DR – Just go to Office 365 Admin, Settings, Services & add-ins and look for Project Online – then see if Roadmap is available for you to turn on yet.  If you don’t see Project Online listed then go to https://project.microsoft.com – hitting that page should trigger the back-fill that will make Project Online appear in the Services & add-ins page (although that doesn’t mean Roadmap is ready for you yet).  More info at https://docs.microsoft.com/en-us/projectonline/turn-roadmap-on-or-off

One thing that people have noticed is the new service plans that show up under Project Online Professional and Premium.

Service plans under Project Online Premium

The top three are new, and are used behind the scenes for the various parts of Roadmap – Common Data Service for Apps (CDS), Flow for Project, and P3, which you can think of as the Project Online equivalent of E3.  These would have appeared around the end of November – but they do not mean that Roadmap is deployed, just that some of the pieces we needed first are in place.  Along with the CDS comes a new App link to Dynamics 365 – even if you didn’t previously have any Dynamics subscriptions.

Starting today, December 7th 2018, we are turning on Roadmap – but just for a small percentage of tenants initially.  The tenant administrator can check whether it is available for your tenant by going to the Office 365 Admin Center – https://admin.microsoft.com/AdminPortal/Home#/homepage – then the Settings option and the Services & add-ins option under that.

O365 Admin Settings - Services & add-ins

If Project Online isn’t shown it could be that no one has visited https://project.microsoft.com (the new Home experience) recently – as that page is used to trigger the back-fill of other Roadmap requirements – so go to that page – then look again at the admin center.

*** Update 12/10 - we also have a current issue that the preview Admin Center isn't showing Project Online - so switch to the classic version - we should have this fixed shortly ***

If Project Online is shown then you can click that option and another pane opens with a switch to turn Roadmap on.  If Roadmap isn’t ready for you then you will see the following:

Roadmap pane - Roadmap not ready

If Roadmap is ready then you will see the same – without the red note – and you can slide the switch and Save to turn Roadmap on – after reading the mentioned article, of course!

Roadmap pane - Roadmap ready to switch on   Roadmap pane - Roadmap switched on

Once you have switched it on you can navigate back to https://project.microsoft.com – or hit the Project icon in the App Launcher – and you will land on the Home page, but it will have a new super power – creating Roadmaps!

Home page with the new Roadmap feature

For further help on Roadmaps you can see our new videos on the Welcome to Roadmap landing page - https://support.office.com/en-us/article/video-welcome-to-roadmap-57764149-51b8-468f-a50d-9ea6a4fd835a.  There will be even more videos arriving from December 10th 2018 onwards.

If Roadmap isn’t ready it could be that we are still not fully rolled out – or that your tenant needs to be updated to CDS 2.0 and we should have more information on that soon.

Enjoy!

A new Blog home for Project Support!


You may have noticed recently that the Project Support blog on TechNet has gone away – and you may even have seen an “Oops! That page can’t be found.” message.  The good news is that most of the content has moved over to a brand new blog at https://techcommunity.microsoft.com/t5/Project-Support-Blog/bg-p/ProjectSupport.  Be warned that the migration will have left some bad links in the migrated posts – I will try to fix the more obvious ones by making some edits, but for now you may need to search again on the title from the URL, and all should be good once our SEO kicks in.  In an ideal world I would also have updated some of the articles, at least mentioning that the topic still applies to the more recent releases – but time didn’t allow for that.  We will also be getting up to date with the recent February and March 2019 updates now we are migrated – and the full pages of updates for each release should appear over the next few days (you might find a cached version through your search engine in the meantime).

Screen shot of the new Project Support Blog home page

The MSDN blog (the BriSmith one) will also be migrating in the next few months, and content will be split between this blog, the Project one at https://techcommunity.microsoft.com/t5/Project-Blog/bg-p/ProjectBlog and the Planner blog at https://techcommunity.microsoft.com/t5/Planner-Blog/bg-p/PlannerBlog depending on the topic.

In deciding which posts to migrate I looked at activity as well as dates, but there were still some older posts with valuable information for that very precise issue that not many people might hit – but anyone who does will be glad to find the post!  If one of your favourite links appears to have been lost and is not in the new blog then feel free to contact me – I can probably find it.  Another casualty of the move was the great comments that many of you contributed over the years.  I do plan on seeing if I can incorporate some of the best of these in the body of the appropriate post.

 
