Boost Performance of the DEF Sync Call

The feeling you get when something you poured hours and hours of work and sweat into goes live is priceless! But what if you are not completely happy? What if something is curbing that happy feeling a notch or so? This is exactly what happened to me when I pushed the whole Tenant Service deal live. You can read more about how we used the Tenant Service in my previous blog here.

It was just such a relief! The launch was super smooth; I honestly did not expect it to be. I was prepared to be a warrior and fight any issues that came my way. But I was a lucky warrior who simply had to pose with the armor on, with no real fight. lol

Performance of the form submission after adding the 'Trigger DEF Sync Pipeline' submit action was poor. It was taking around 8–11 seconds on average for a form submission to succeed. Upon more research, it turns out custom form submit actions are synchronous: they run one after the other before the friendly thank-you page or note is shown to the end user.

The best option I could think of was to make the most expensive pipeline step asynchronous. It was easy to point out the culprit in my case: the step responsible for submitting the form submission information to Salesforce was adding most of the processing time. I logged a ticket with Sitecore to understand how exactly I could make a pipeline step asynchronous. They suggested I do something like the below:

Make a custom pipeline processor that inherits the OOTB processor for the step of concern

Then, on the pipeline step of concern, point to the custom processor instead of the OOTB one. Something like below:

Swap the processor type to the custom one created in the solution
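To make the idea concrete, here is a minimal sketch of what such a custom processor might look like. This is an assumption-laden illustration, not the exact code: `OotbSubmitStepProcessor` and the namespace are placeholders for the real OOTB processor you inherit, and the method signature follows the general DEF pipeline step processor pattern.

```csharp
// Hypothetical sketch only: wrap the expensive OOTB step so it runs off the request thread.
// "OotbSubmitStepProcessor" is a placeholder for the processor the step used before.
using System.Threading.Tasks;
using Sitecore.DataExchange.Contexts;
using Sitecore.DataExchange.Models;
using Sitecore.Services.Core.Diagnostics;

namespace MySolution.DataExchange.Processors
{
    public class AsyncSubmitStepProcessor : OotbSubmitStepProcessor
    {
        protected override void ProcessPipelineStep(
            PipelineStep pipelineStep, PipelineContext pipelineContext, ILogger logger)
        {
            // Fire and forget: the form submission returns immediately while the
            // Salesforce submit continues in the background. Log failures, since
            // nothing upstream will observe them anymore.
            Task.Run(() =>
            {
                try
                {
                    base.ProcessPipelineStep(pipelineStep, pipelineContext, logger);
                }
                catch (System.Exception ex)
                {
                    logger.Error("Async Salesforce submit failed: " + ex);
                }
            });
        }
    }
}
```

The trade-off of fire-and-forget is that a Salesforce failure no longer blocks or surfaces to the end user, so logging inside the background task is the only visibility you have into problems.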

I was not confident it would work, but I tried my luck, and performance was much better afterward. Submissions averaged between 3 and 6 seconds, and later submissions were even quicker. This is a huge improvement from where things were, so I am one completely happy person after pushing this fix live. 🙂

Do note that for this pipeline to fire, on the first pipeline step (based on the template below), ensure the Minimum batch size is set higher than the actual queue size in your scenario. In our case, the queue size will always be '1' because we fire the pipeline on a single submission. As the screenshot below shows, I set it to '2'. Only then will our custom processor run; if you are curious, decompile the code to see why I said this.


/sitecore/templates/Data Exchange/Providers/Salesforce/Pipeline Steps/Create Object Queue Pipeline Step
Showing the minimum batch size recommended for the form submission use case. The value depends on your scenario.

Now I can relax and enjoy the successful launch! I can also recommend the Tenant Service and the CRM Connector with more confidence for the scenarios that need them.

Ship Sitecore Contacts as Custom Salesforce Objects

NewCustomObject

Hello! If you have not read my first post about the challenging new proof of concept we did to confirm whether we could use the Sitecore CRM Connector given our requirements, please give that a read here. In this blog post, I will cover the first two goals I had in mind to test the waters and see if the connector could be leveraged or if we needed to switch our game plan. The short answer is yes, I was successful, and though we had a few gaps that need addressing as we roll forward, it felt doable given the initial findings.

Let us look deeper at how I configured the Sitecore CRM Connector to ship Contacts as custom Salesforce objects. In our case they were custom Salesforce objects, but the same steps could be applied to map Sitecore Contact information to any defined object in Salesforce. Below are the steps, in order, that you would need to follow to ship Sitecore Contacts to Salesforce as specific objects.

  • First things first, define value accessors for your Salesforce object and make sure to add the proper API field name and any other settings that are needed for every field in question. See the quick screenshot that should help you understand this step better.
Screenshot depicting value accessors for custom object in Salesforce
  • Ensure you have value accessors on the Sitecore Contact as well if, say, a custom facet model and properties were defined on your Sitecore Contact. This has been done many times in the past, and I have blog posts covering this topic if you would like to give them a read. Check out these posts.
  • Now that you know what data to read and exactly what it maps to on both objects in question, let us define the pipeline steps. It is always best to duplicate the default pipeline that comes OOTB when you install the CRM Connector. It is the one at the path below; duplicate it and call it your own.
    /sitecore/system/Data Exchange/Landrys Salesforce Tenant/Pipelines/xConnect Contacts and Interactions to Salesforce Sync
  • Alright, now the specific steps below need some edits to accomplish what we are after.
  • Go to your duplicated pipeline step called 'Fake Resolve Salesforce Contact', which should be inside the pipeline 'Process Single Contact from xConnect Pipeline'. Make sure to give a proper name for the object and pick your shiny new value accessors defined for the Salesforce object. It would look something like below.
Depicts the pipeline change to read the new object fields; the object name should be the API name of the object of concern in Salesforce.
  • Create a new value mapping set that connects which field from the Sitecore Contact should map to which field on the Salesforce custom object. Once you have that sorted, ensure the step called 'Apply Mapping from Contact Model to Salesforce Feedback' inside 'Process Single Contact from xConnect Pipeline' has the correct mapping set picked.
  • Now go to the pipeline step 'Add Salesforce Object to Queue Pipeline Step' and ensure the ObjectName is correctly noted under Queue Settings. You may decide to leave this as the default, but it is better to use distinct queue names that reflect what is truly present in the queue. It should then look like below.
  • Last, but important: if you do not do this, only the batch-size-defined number of contacts will be shipped to Salesforce, due to a bug (as of this writing) in the latest version of the Connector. Note the case; the highlighted value should be completely lower case. Go to the pipeline step 'Submit Remaining Contacts In Salesforce Object Queue' inside the pipeline 'Read Contacts from xConnect Pipeline' and ensure the 'Object Name' field has the right settings.
Depicts the object name in lower case to avoid missing contacts in Salesforce

And that is pretty much it. If all goes well and you go to a tab that displays custom objects in Salesforce, you should now see created objects of the type you wished for.

screenshot depicting custom objects created

Now, if you have custom facets and fields that need mapping, it is super important to load the corresponding facets on the 'Read Contacts from xConnect' step inside the 'Read Contacts from xConnect Pipeline'.

Depicts custom facet that is picked while loading xConnect Contact

It is also important to have your custom collection model loaded on the endpoints if you have one. I think I covered some of this in my previous blogs as well, so you can refresh on those here if needed.

So far so good; I see what I need in Salesforce. But the solution of stowing away all that information in custom facets on the xDB Contact record seemed like overkill, and messy. In our case, our goal was to send every single form submission of a Contact on to Salesforce. To do this via the path suggested above, which maps a Sitecore Contact to a Salesforce object, we would have to store literally every single form submission on the xDB Contact record; though doable, it seemed pointless. We did not see value in what we were storing, since we may only personalize on the latest form submission; the rest of the data is a waste of memory and could quickly grow our xDB indexes.

I am all about efficient solutions, and this did not seem like one. Then something caught my eye on the Sitecore downloads page -> Tenant plugin for Sitecore CRM Connector.

What does Salesforce CRM plugin for Tenant Service do?

My mind started wondering: what if I could use the Connector to push all the information users entered on forms, but without having to save all of it on the Contact record? That would be amazing, right?

In my next blog post, I will talk about how far we got using the Tenant plugin and a DEF framework submit action to actually do what we had in mind and land on a win-win solution.

Sitecore Connect to Salesforce CRM – A New Take!

Proof of Concept

Almost a year ago, I embarked on a journey to explore the Sitecore Connect to Salesforce CRM connector. Almost instantaneously, in my first week, one of the initial questions I had was: can I map Sitecore Contacts to some other object in Salesforce, such as Leads, for example? Upon researching, I figured out there is no OOTB way to map to anything other than the Salesforce Contact. Back then, the client was okay with us pushing Sitecore Contacts as Salesforce Contacts and having a special view in Salesforce to keep Sitecore-pushed Contacts in their own special bubble, so their end users' Salesforce view did not get cluttered with Sitecore Contacts. So, the door of whether we could push Sitecore Contacts as different Salesforce objects was left unopened.

At the end of last year, the same question came our way during discovery with another client, and this time around the client's data architecture was much more complicated and could not abide simple workarounds; they wanted us to ship form submission data as custom object(s) to Salesforce. This, though doable in theory, was never tried, or at least never really documented as a use case in the Sitecore world. So, the only way to prove the theory was, well, to actually do it.

We convinced the client to let us roll with a POC, and my gut said I could pull this off based on all the learning I had done so far on the connector. Check out my blog series on my experience with Sitecore Connect for Salesforce CRM and my VDD session on the same.

I was happy and nervous to take up this challenge a couple of weeks ago. I knew there was nothing out there that could help me once I went down that path; though it was exciting that I would potentially be the first one doing this on record, at the same time I could not stop thinking of all the walls that would come my way. 🙂

In coming blogs, I will share the whole experience with you all: what my strategy was, the problems I solved, milestones, happy moments, and a couple of bugs I excavated while I was at it. Let's get started. To begin with, on this type of project I like to jot down my thoughts and an outline that helps me stay focused, but also helps me look back on my steps if need be. Staying organized has always proven to be my best friend.

Here are the goals I had in mind for the POC:

  • Prove that Sitecore Connect for Salesforce CRM can ship Sitecore Contact information as custom object(s) in Salesforce.
  • Configure the Connector to map custom facets on the Sitecore Contact and send those custom facets as custom objects to Salesforce. Meaning, no longer Contact-to-Contact mapping between the two systems; it would be completely custom.
  • Explore the 'Salesforce CRM Plugin for Tenant Service' option noted in the Downloads section here and see what it has to offer. There was not much documentation on this one, so it was a bit of a suspense, but again my sixth sense hinted it might prove useful.

I got these goals jotted in my document and in my heart and kept moving through the milestones. As I passed each milestone, I got more and more excited to reach the end of the POC. I will take you through how I achieved each of these goals, and hopefully it will help you achieve something similar in the future.

Experience profile search not working

Do you remember an issue that took almost a month to resolve, constantly sitting in the back of your head, bothering you and reminding you that it is still pending? This case was one such incident.

Here goes the whole deal! I appreciate the Sitecore support team's resilience and how proactive they were in responding. But I also feel this should be known or documented, because whoever has used xDB before knows there can be challenges, and they have to be solved quickly to mitigate data latency issues.

Problem: Experience Profile search was not working on older Contacts, as PII indexing was not enabled initially and was only enabled later.

Story: Human error happens, even after you have painfully documented every single step that needs to be done manually after a deployment. Unfortunately, config edits on the Processing server or Search Indexer side of things are manual deployments on our end at this time; anything we push to the CM/CD roles is an absolutely one-click deployment using patches as a standard. So, I missed doing the step noted here during the production deployment. I realized the problem noted above could be because of that, changed the setting to 'true' in the applicable roles/locations on Managed Cloud, and hoped for the best. When I did this, search did start to work, but only on newer Contacts added to xDB after the change I had made. Why?

Resolution: I had seen this issue before when I first started playing with xDB, Experience Profile, and various other xConnect-related services. I was pretty confident I needed to rebuild the xDB index, which is usually the answer for such funky behaviors. I did that with the steps below:

  • Open the Azure portal and open the resource group of concern
  • Go to the specific resource for the xc-search app service
  • Open Advanced Tools and hit Go; this takes you to Kudu
  • Select the PowerShell option, drill all the way down to the Index Worker location, and follow the steps noted here
  • It said the rebuild succeeded, but nope, the issue was not resolved; search still did not work on older contacts

I was not sure what else to do, so I logged a support ticket with Sitecore and explained what I had tried so far. They confirmed that a rebuild was what I should be doing to resolve the issue at hand. Strange, since I had done exactly what is written in the documentation. It turns out the rebuild has to be run subtly differently, and in a different location, when it comes to Managed Cloud. Support suggested I run the rebuild command in the location below:

D:\local\Temp\jobs\continuous\IndexWorker\<Random Web Job Name>

The command is also subtly different from the documentation; it should be .\Sitecore.XConnectSearchIndexer.exe -rr

I then started patiently monitoring the rebuild process using the instructions here: https://doc.sitecore.com/developers/100/sitecore-experience-platform/en/monitor-the-index-rebuild-process.html

You can also monitor the same from Solr by following the steps below, which were shared by Sitecore support:

  • Go to the admin panel
  • Switch to the xdb collection and click "query"
  • Then run the query: id:"xdb-rebuild-status"
  • This tells you the exact current rebuild status of your xdb index
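If you prefer scripting the check over clicking through the admin UI, the same query can be issued over HTTP against Solr's select endpoint. A minimal sketch; the Solr host below is a placeholder for your own endpoint, and "xdb" is the collection name from the steps above:

```csharp
// Hypothetical sketch: poll the xdb collection for the rebuild-status document.
using System;
using System.Net.Http;
using System.Threading.Tasks;

class RebuildStatusCheck
{
    static async Task Main()
    {
        using var client = new HttpClient();
        // Same query as in the admin panel: id:"xdb-rebuild-status" (URL-encoded)
        var url = "https://your-solr-host:8983/solr/xdb/select?q=id%3A%22xdb-rebuild-status%22";
        var json = await client.GetStringAsync(url);
        // Inspect the returned JSON document for the current rebuild state
        Console.WriteLine(json);
    }
}
```

Running this on a schedule (or in a loop with a delay) saves you from refreshing the admin panel while a long rebuild grinds through its stages.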

Yep, I did all of that, and every single time I tried, it got stuck at 95% in the finishing state; it never completed. So Sitecore support asked me to do a lot of steps to debug further: enhance the logging and log more things to the index worker logs to help us understand why it was stuck and not completing. They identified the issue to be a setting that was higher than the default. The funny thing is, we did not set these settings up; they came with Managed Cloud. Anyway, below is the setting I had to swap.

Go to the location below in Kudu: D:\home\site\wwwroot\App_Data\jobs\continuous\IndexWorker\App_Data\config\sitecore\SearchIndexer\sc.Xdb.Collection.IndexWriter.SOLR.xml

I could see the config below:
<Solr.SolrWriterSettings>
  <Type>Sitecore.Xdb.Collection.Search.Solr.SolrWriterSettings, Sitecore.Xdb.Collection.Search.Solr</Type>
  <LifeTime>Singleton</LifeTime>
  <Options>
    <ConnectionStringName>solrCore</ConnectionStringName>
    <RequireHttps>true</RequireHttps>
    <MaximumUpdateBatchSize>1000</MaximumUpdateBatchSize>
    <MaximumDeleteBatchSize>1000</MaximumDeleteBatchSize>
    <MaximumCommitMilliseconds>600000</MaximumCommitMilliseconds>
    <ParallelizationDegree>4</ParallelizationDegree>
    <MaximumRetryDelayMilliseconds>5000</MaximumRetryDelayMilliseconds>
    <RetryCount>5</RetryCount>
    <Encoding>utf-8</Encoding>
  </Options>
</Solr.SolrWriterSettings>

The "MaximumCommitMilliseconds" value was too high, and it was recommended to change it to "1000", which is apparently the default value. My suspicion is that when best practices were applied per a KB article, the Managed Cloud team swapped it as part of their default setup steps.

I made the above change, queued the rebuild process again for the nth time, and monitored patiently. It worked!!! I almost cried out loud, given I had done these steps so many times and waited so many days to get the issue resolved.

I am hoping the Sitecore team can update the documentation around this specifically for Managed Cloud. Until then, I hope this helps someone else losing sleep over a similar problem and needing to queue that index rebuild.

Know your responsibility with Sitecore Managed Cloud

I have worked with several different Sitecore implementations; each one is unique in its own way. Teams are different, responsibilities are different, people, definitely different 😉 Managed Cloud is also subtly different when it comes to a few mundane chores you may have done differently in the past. With Managed Cloud, it is a partnership between the client, the implementation partner, and, yes, the Sitecore Managed Cloud team. So we have to keep each other posted; it is definitely challenging and needs to be well planned out, especially on tight timelines like in our case.

I must have logged around 50 tickets at my last count, combining traditional Sitecore tickets and Managed Cloud tickets. In retrospect, though it feels like I spammed Sitecore, some of the questions were purely about understanding responsibilities and where to draw the lines between all the teams involved. The documentation does not call this out, unfortunately! Also, the typical turnaround time noted on these types of support tickets is 3 days, which is a great percentage of a one-sprint go-live project like ours. lol. So my approach was to log the ticket and keep working on the side to figure things out while we waited for the Sitecore team to respond.

I will start with the easy ones first. As we built more confidence from a code and content perspective, especially when we were ready to hand off our features for content load, it was super important to ensure backup/restore was turned on and matched our requirements in terms of backup frequency and retention for the environment in question. These are handled by the Sitecore Managed Cloud team via service cloud requests.

Backup Schedule for Azure App Services

Use this request to back up your app services: basically code/configs and pretty much the file system. I requested this for production, for example, with daily backup frequency and a week of retention. You can relax the requirement for environments that are less of a concern.

Backup Schedule for Azure SQL databases

Use this request to back up your databases: your content and basically anything in Sitecore. For our production instance, I requested backups for core, master, and web with daily backup frequency and four weeks of retention. Usually, this type of retention should be enough for most solutions.

Quick tip: As a perfectionist (at least I would like to call myself that, lol), I wanted to ensure all the health checks noted here pass on all my CM and CD (web role) instances per the documentation out there. But it turns out it is okay for /healthz/ready to fail, and it does not mean anything is wrong. So if you see your instance giving an "Unhealthy" status on these URLs, do not fret; you did nothing wrong as long as /healthz/live is good on CM and CD. The /healthz/ready check is more for other roles, such as Processing or Reporting. The documentation could be out of date, but I hope this saves you some time.

Need a custom domain to ensure your URLs on lower environments and production are end-user friendly, and not the hash-looking ones that come with the initial setup? The Sitecore Managed Cloud team does not own this; it is you, and probably the client team, depending on the specific scenario. The links below might help:

https://docs.microsoft.com/en-us/azure/static-web-apps/custom-domain

https://docs.microsoft.com/en-us/azure/app-service/configure-ssl-bindings

SOLR

I cannot answer this one with confidence; the Sitecore team said different things about it on different tickets. One thing I clearly remember: while I was debugging an index-related issue, I was very tempted to kick Solr and restart it. Typically, if your team owns the infrastructure, you can log in via PuTTY or PowerShell and run some quick commands. Luckily, the issue got resolved and I did not need to perform this step, but the Sitecore team did mention I would have to create a ticket if I needed to do it. Also, the Sitecore team sets up Solr when they set up a brand-new resource group per the contract, so I guess it is mostly Sitecore that owns Solr. The only access we had was the Solr admin console, and to be honest, there are so many things you cannot do from the console; you need more command power when it comes to Solr.

Security Hardening, Session Set up and Tuning

Wondering who does the whole nine yards: performance tuning, hardening, session state setup, etc.? I did too. When I reached out to Sitecore, they said it was going to be me. I was bummed; my first thought was, why doesn't this come already set up?

But since I got the confirmation, I started at full speed, only to find out the hard way that session setup was already done by the Sitecore teams. lol. Just so you do not waste your time working out the correct decisions for out-of-proc session and so on: by default, CD roles are set to use out-of-proc session state with Redis as the provider. So no action was needed from me to set up anything related to session. Crossed that off my list.

Now coming to security hardening: I noted that some items were already in place, done by Managed Cloud. However, most were not done, so I encourage the regular drill of going over the steps we typically do, noted here, for Experience Platform security chores. I have some detailed notes with me if someone is curious; reach out!

Almost forgot: for tuning, none of the old rules apply to Azure SQL. So it is important to give the links below a read and do what you can:

https://doc.sitecore.com/developers/100/sitecore-experience-manager/en/monitoring-and-maintaining-azure-sql.html

https://kb.sitecore.net/articles/909298

I could actually do a bunch of what is mentioned in the links above, and some of it was, again, already done. 🙂

I also remember having to follow this one, https://kb.sitecore.net/articles/858026, to overcome a pesky error like the one below that kept creeping into my logs. After I changed the value to the suggested one, the error did go away.

Whitelist Authoring

This is actually related to some of the steps we need to do for security hardening when authoring is exposed publicly. If authoring is not exposed, you can relax some of those requirements; then again, Sitecore recommends doing those steps regardless, so I do not want to give an incorrect recommendation. But having only a finite set of people reach authoring is more secure anyway. To do this, I read the documentation out there but was mostly confused; the Sitecore Cloud team was sweet enough to actually do this chore for us. Once they did it, I figured it was not as complicated as it sounded in the documentation. Basically, you run a PowerShell script (it is in the documentation) to fetch what authoring needs to talk to, and whitelist all of those as outbound addresses on the CM app service. Plus, they added the outbound IP addresses of reporting to the SQL Server firewall. I would still think it is smart to ask the Sitecore team to do this. You are good with this setup until your app service is scaled up or down, or you change the pricing tier. So far, it has been a month, and we see no issues with the initial setup.

Another important thing to note: we had to add the authoring IPs to the outbound addresses on the CM app service as well, because we noted that a request back to authoring from the authoring server was not being processed and was blocked with a forbidden message. lol. To avoid this, I had to add a couple more outbound IP addresses to the existing list; it was not fun because I had to add a bunch more manually.

Debugging Tricks

This is the last section of this blog, but definitely the most important one. Without this knowledge, I felt crippled. It was super hard to keep track of logs using Kudu; timestamps are not right, and you never know if you are looking at the correct log file. So I recommend, even if it takes a couple of extra minutes, always reviewing the latest logs by following the steps here.

I was so much more confident debugging issues once I got a handle on the latest logs. Also, make sure your app settings are correct so Application Insights always receives the logs properly.

Alright, this is it for now! Next up, I will sum up how Sitecore helped me solve an issue with the Index Worker. It took us a good month or so, but hey, it is now fixed. It is also very much related to Managed Cloud, but it deserves a special blog entry. 🙂

Get your work on to Managed Cloud

release pipeline

If you have not checked out the beginning of my journey with Sitecore Managed Cloud, please start there to truly understand the basics. Now, if you already know your way around Azure and your team has started the real work, you must be really itchy to get some code up to your Azure app services. Let's roll up our sleeves and do just that.

First and foremost, understand the app settings. With Managed Cloud, it is even more important to ensure your settings patch config has the right, expected values, so as not to throw off things you would hardly imagine. Each setting has significance and should be paid attention to. For instance, I had an incorrect value for 'role:define' on my CM, and it threw off the indexing strategies; it was painful to debug. So do pay attention to all the values. I am including some samples and guidelines below, but it all depends on what your instance needs to breathe and do, so do it your right way.

  • role:define – Any of the below, depending on the role of the server: ContentDelivery; ContentManagement; ContentDelivery, Indexing; ContentManagement, Indexing; Processing; Reporting; Standalone. Note: though ContentManagement is a right value, depending on your infrastructure setup you may want ContentManagement, Indexing.
  • env:define – This is up to you, really, depending on your patch file usage. In my case, I like using {envname}{rolename}. Example: PRODCD, UATCM, etc.
  • search:define – Either Solr or Azure. In my case it was Solr.
  • messagingTransport:define – AzureServiceBus. Note: I took a peek at what Sitecore set up for us on Managed Cloud and made sure we used the same. In UAT, I saw some issues with Marketing Automation because I had this value set up incorrectly.
  • AllowInvalidClientCertificates – You would think this should be false, but for some reason that threw off a lot of things, and the Sitecore Managed Cloud team had set it to true, so I left it as 'True'.
  • useApplicationInsights:define – True. Without this, logging will not work properly. As you will realize, on Managed Cloud all logs tie back to Application Insights, so it is super duper important to have this flag set correctly.
  • exmEnabled:define – no. Turn this off if you are not using it; you can always turn it back on when you choose to use EXM.
  • aspnet:RequestQueueLimitPerSession – 25. This value may need to be tuned if you constantly see errors in your logs. Good article.
Important application settings to pay attention to
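As an illustration of how the settings above typically land in an instance, here is a sample appSettings fragment. This is an example only; the values are the ones discussed above for a hypothetical CM role with indexing, and your roles and environments will dictate your own values:

```xml
<!-- Example only: application settings for a CM instance that also runs indexing -->
<appSettings>
  <add key="role:define" value="ContentManagement, Indexing" />
  <add key="env:define" value="PRODCM" />
  <add key="search:define" value="Solr" />
  <add key="messagingTransport:define" value="AzureServiceBus" />
  <add key="AllowInvalidClientCertificates" value="true" />
  <add key="useApplicationInsights:define" value="true" />
  <add key="exmEnabled:define" value="no" />
  <add key="aspnet:RequestQueueLimitPerSession" value="25" />
</appSettings>
```

Whether these live in web.config, a settings patch, or as Azure App Service application settings depends on your deployment approach; the key names stay the same either way.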

The next step is to set up TDS Web Deploy to ship your code and items to your shiny new Sitecore Managed Cloud instances. You can set this up to be manual, or to trigger automagically when someone merges to a specific branch in Azure DevOps. In our case, we had a lot of hands at any given point of time due to the super-tight timeline, so we kept it manual to ensure we could make choices depending on the impact. Below are two links that really helped me get through this:

Step 1: Set up the TDS project for Web Deploy. Follow the steps here; the last step is optional, and I do not remember doing that bit. You are good to move to step 2 as long as you see the Web Deploy package and PS script in your TDS project's bin folder. https://www.teamdevelopmentforsitecore.com/Blog/TDS-5-8-Local-Set-Up

Quick screenshot of where the web deploy output would go on building your solution

Step 2: Set up and configure the Azure build pipeline and release pipeline. The steps noted here are pretty thorough and will get you started:

https://www.teamdevelopmentforsitecore.com/Blog/TDS-5-8-Configuring-Azure-Build-Pipeline
https://www.teamdevelopmentforsitecore.com/Blog/TDS-5-8-Configuring-Release-Pipelines

The only catch is that in the release pipeline setup you will need an Azure subscription. This step is a must in order to connect to your app service of concern and actually deploy things. To get this, you have to do a couple of things, in order.

Log a cloud support request titled "Get Access to Azure – Service Principal". Give the details of your resource group; this type of ticket is usually completed within minutes, and you will get information like the below on your ticket.

Note this down and keep it handy for the next step.

Go to your Azure DevOps project and click on Project Settings at the bottom left. It is hard to find, so I am adding a screenshot.

Project Settings

Next, click on Service Connections in the left menu; you should now see an option to add a new one. Select Azure Resource Manager, pick Service Principal (manual), and load it up with all the proper values from the support ticket's resolution comment. Below is a quick mapping that could help you.

Azure service principal field: value (or Sitecore support ticket label), plus notes:
  • Environment: Azure Cloud. Should be picked by default.
  • Scope Level: Subscription. Should be picked by default.
  • Subscription Id: Subscription Id. You can also get this from the Publish Settings file (Subscription Id).
  • Subscription Name: see notes. You can also get this from the Publish Settings file (Subscription Name).
  • Service Principal Id: Application (client) Id. This will be on the ticket; get it from the label titled as noted.
  • Credential: Service principal key. Should be the default option.
  • Service Principal Key: Service Principal Password/API Key. This will be on the ticket; get it from the label titled as noted.
  • Tenant ID: Tenant ID. This will be on the ticket; get it from the label titled as noted.
Give it a meaningful service connection name; I like using {project name - instance type - last 5 of resource group}.

Refresh the Azure subscription section in your release pipeline configuration and you should now see the new option. Select it, and you should see the familiar App Services from your resource group under the App Service Name field. Select that as well and you are pretty much golden. Follow the rest of the steps in the links noted above and test your build and release pipelines.

If you were lucky like me and did all your steps right, you should now see your sweat and blood on a Sitecore Managed Cloud instance.

App service name and Azure subscription settings on Release configuration

Next up, we will look at some ways to protect your code and items, the boundaries of what you are responsible for versus what Sitecore is responsible for, and debugging tricks. Stay tuned!

Sitecore Managed Cloud – The Beginning

New Beginning

Towards the end of last year, I got an opportunity to lead a project that had to go live in a sprint. Basically, a sprint to architect, develop, test, and go live. It was something I thought was impossible, even for an MVP version. But I took it as a challenge and wanted to see how far I could go alongside my team. We had a really steep learning curve; below are some of the new things, and of course Managed Cloud stood out, because this was the first time I was owning the implementation and deployment on such infrastructure.

  • Sitecore 10
  • Brand new Zodiac/Constellation foundation solution
  • A development team new to the Experience-first paradigm
  • Sitecore Managed Cloud
  • Many, many more first-time things from the front-end development team as well

When I am given something new to own, learn, and deliver, my first thought is to understand the basics, no matter how many hours I have to spend on it. Without this, I have found myself struggling in the past, unable to break through walls and steer the team and myself with confidence. So I never do it any other way now: I started with the basics and slowly, but surely, I could see myself gaining momentum.

First things First….

Understanding the terminology and mapping it back to knowledge from previous experiences. Below are the steps and notes from every first step I took to get us to where we stand right now. They should really help if this is your first time, like it was mine, and you do not have a direct mentor you can reach internally in your company.

Get in there!

To get access to the Azure portal, you first need access to the "Create Service Requests" section of the Sitecore Support portal. Reach out to your direct Sitecore contact and request it. Once you have it, you should see the section below on your Sitecore Support login. More details on the process are in this article.

second section that will appear once you have permission to the cloud organization

You should now be able to create a service request to get access to the resource groups available on the account. You can read more details about this request in Sitecore's KB article.
This ticket is usually processed almost instantly, and once you log in to the Azure portal, you should see Sitecore in your subscriptions.

Sitecore subscription now visible on Azure login

Azure Resource Groups

Alright, you are in; the first step is sorted. Now what? If, like me, this is your first time inside PaaS-based Azure, you are probably lost. There is so much documentation out there that too much of it can be confusing. So here is what you need to understand: every environment created on your subscription, typically one each for QA, staging/UAT, and production, is called a 'Resource Group'. If you have sufficient permissions, you can create a service request to set up a new resource group for your needs; this request is called 'Create New Environment'. Of course, there could be more resource groups, created or needed based on your contract with Sitecore. We had three of them for this specific client. You can access resource groups from your Azure portal by clicking below.

Ensure you have selected Sitecore subscription before clicking this.
Three resource groups one for each environment on contract

Understand what you see

Now, every resource group has several resources of several types. The first time I saw this, I thought: what are all these? I had no idea, and I really wanted to understand them before going any further. So I created a mapping of every resource to what it means and what it is responsible for.

view of resource group resources. As you can see there would be around 44 of them of different types

I like to build on what I am familiar with, so I started with an article that explains the different roles in the Sitecore architecture. Pay close attention to 'scaled XP'. All I had to do then was map everything I saw in the resources back to that diagram; a visual always helps! This understanding is really important to ensure you know what to expect, how to debug, and beyond. I took the pains of actually mapping each resource and adding a line or two about what it means. I am sharing it in hopes of helping someone else; check it out here.

Next up is to know how to deploy your code and items.

Deploy your custom facet on Sitecore Managed Cloud

Over the past couple of projects, we have worked extensively on extending the xConnect facet, adding properties to it as needed, and tucking it away nicely for later use at all touch points. The whole process has become second nature by now. I still remember my first attempt at this, where I missed various things and wasted more than half a day. 🙂 I do not make rookie mistakes any more. lol

So, traditionally, if you need an xConnect facet, you follow the most common steps on your local or on a simple upper environment. This is well documented everywhere, so I will not repeat the process. Here is one of my blog posts you can refer back to.

Now, what if your upper environment is Sitecore Managed Cloud? The steps are different in this case, so it is important to know the drill:

This link (Sitecore documentation) actually helped me quite a bit. I recommend reading it first, then following the steps below as an additional reference to get rolling with custom xConnect data.

Steps to actually get it to work:

  1. Deploy your JSON model to all xConnect collection roles in your resource group.
  2. Also deploy the JSON to the index worker web job, which usually lives under the xc-search App Service role in your resource group.
  3. Drop the JSON on the Marketing Ops App Service as well.
  4. Ensure the dll that defines your custom facet, along with the config patch, is deployed to all core roles: the CM, CD, and Processing App Services.
  5. Restart all the xc roles where you dropped the JSON, plus all core roles, to ensure everything reads from the right places.
  6. I would also recommend stopping and starting the web job if you are not confident it restarted along with the app service (xc-search).
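The restart round in steps 5 and 6 can be scripted with the Azure CLI. The resource group and app service names below are hypothetical; look yours up in the portal. The script only prints the commands (dry run); drop the echo to actually run them.

```shell
#!/bin/sh
# Dry-run sketch of the restart round after deploying the JSON model and dll.
# App service and resource group names are placeholders; check your portal.
RG="myproj-prod-rg"
COUNT=0
for APP in myproj-prod-cm myproj-prod-cd myproj-prod-prc \
           myproj-prod-xc-collect myproj-prod-xc-search myproj-prod-mops; do
  echo az webapp restart --name "$APP" --resource-group "$RG"
  COUNT=$((COUNT + 1))
done

# If you are not confident the index worker web job came back with its app
# service, bounce it explicitly. "IndexWorker" is a placeholder job name:
echo az webapp webjob continuous stop --name myproj-prod-xc-search \
     --resource-group "$RG" --webjob-name IndexWorker
echo az webapp webjob continuous start --name myproj-prod-xc-search \
     --resource-group "$RG" --webjob-name IndexWorker
```

Scripting this is handy because you will repeat the round every time the facet model changes.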

And that is it; it should all work fine now. You should be able to ship information to the extended xConnect facet fields as needed. Hope this helps someone who is unsure how to deploy a custom extended facet to Managed Cloud!

Unveiled a Sitecore Connect for Salesforce Bug

bug

It is always a great feeling when an issue was not something you messed up or caused, but was parceled along with the installation you had done. Can't explain it, but hey, it was not me. 🙂

So, I bumped into this issue while doing very thorough tests on the new fields I had just added to my facet, making an effort to push that additional information, when available, all the way through to Salesforce CRM and SFMC. While running these tests, I wanted to quickly ensure a Salesforce contact was flagged as "Updated," so the next connector run would try to update the record in Sitecore. To flag that, I cleared a field that had data in it.

The connector ran as expected and picked up the contact I had just edited, but I did not see the field get updated in xConnect. It still had the old CRM value and was not updated to null/empty, which is super weird and unexpected. At the end of the day, the connector's role is to keep data in sync between platforms, so empty data should map to empty data, not to outdated old data from the CRM. To me, this is a data integrity issue if keeping contacts truly in sync matters to clients. So I logged a support ticket just to confirm whether this is the connector's current behavior, or whether I made a grave mistake somewhere in my setup, configuration, or mapping.

Today, Sitecore support confirmed that this is indeed a bug and created one on their system. I do not know why, but I feel like I accomplished something by uncovering this pesky data issue. lol Here is the ticket number, 441899, and I am excited to track this over the next couple of releases (KB853187 and KB951718).

Build Error on TDS Squashed

similar

Remember the feeling when you know you have seen a similar error before, but cannot recollect what was done to resolve it? I spent more than half my day battling something very similar to what I had seen before, but no matter what I tried from recollection, nothing worked. Let me take you through what happened.

Alright, I was setting up an Azure pipeline to use TDS Web Deploy for the very first time. With first-time things, I am usually extra cautious, skeptical, and maybe not as confident as normal. To top that off, struggling to solve something very familiar does something to me; I felt like *****. I was burnt out by the end of it, but the good news is that the issue is squashed and I am on to the next step.

Okay, here is the story! I was being good and following the steps noted here to get TDS Web Deploy done right via Azure DevOps, building and generating artifacts from a solution hosted on Azure DevOps. I thought I was doing well until the install of the Hedgehog TDS Development NuGet package. I opened the Package Manager Console, selected nuget.org as the source, and then selected the project to install the package into. It did its thing and seemed alright. For my sanity, I rebuilt the project I had changed, and boom, I got a huge error message, one you might have seen at some point working with TDS. Full error block below:

The "VerifyTDSVersion" task failed unexpectedly.
System.IO.FileLoadException: API restriction: The assembly 'file:///C:\Users\deepthi.katta\source\repos\project\packages\HedgehogDevelopment.TDS.6.0.0.18\build\HedgehogDevelopment.SitecoreProject.Tasks.dll' has already loaded from a different location. It cannot be loaded from a new location within the same appdomain.
   at System.Reflection.RuntimeAssembly._nLoad(AssemblyName fileName, String codeBase, Evidence assemblySecurity, RuntimeAssembly locationHint, StackCrawlMark& stackMark, IntPtr pPrivHostBinder, Boolean throwOnFileNotFound, Boolean forIntrospection, Boolean suppressSecurityChecks)
   at System.Reflection.RuntimeAssembly.nLoad(AssemblyName fileName, String codeBase, Evidence assemblySecurity, RuntimeAssembly locationHint, StackCrawlMark& stackMark, IntPtr pPrivHostBinder, Boolean throwOnFileNotFound, Boolean forIntrospection, Boolean suppressSecurityChecks)
   at System.Reflection.RuntimeAssembly.InternalLoadAssemblyName(AssemblyName assemblyRef, Evidence assemblySecurity, RuntimeAssembly reqAssembly, StackCrawlMark& stackMark, IntPtr pPrivHostBinder, Boolean throwOnFileNotFound, Boolean forIntrospection, Boolean suppressSecurityChecks)
   at System.Reflection.RuntimeAssembly.InternalLoadFrom(String assemblyFile, Evidence securityEvidence, Byte[] hashValue, AssemblyHashAlgorithm hashAlgorithm, Boolean forIntrospection, Boolean suppressSecurityChecks, StackCrawlMark& stackMark)
   at System.Reflection.Assembly.ReflectionOnlyLoadFrom(String assemblyFile)
   at HedgehogDevelopment.SitecoreProject.Tasks.VerifyTDSVersion.Execute()
   at Microsoft.Build.BackEnd.TaskExecutionHost.Microsoft.Build.BackEnd.ITaskExecutionHost.Execute()
   at Microsoft.Build.BackEnd.TaskBuilder.<ExecuteInstantiatedTask>d__26.MoveNext()	Website.Deploy.TDS	C:\Users\deepthi.katta\source\repos\project\packages\HedgehogDevelopment.TDS.6.0.0.18\build\HedgehogDevelopment.TDS.targets	187	

I can bet my life I have seen this before. So I tried recollecting what I did and also looked online for ways to remove this error:

  1. Unloaded and reloaded the project of concern.
  2. Restarted all of Visual Studio, thinking that would do the magic.
  3. Ensured the NuGet package version matched the TDS version on my local machine. I faintly remembered that conflicting versions could cause a similar error.
  4. Removed the new line (below) that was added to the project when I installed the package. I knew I should not, but I wanted to try it. This worked!!
<Import Project="..\packages\HedgehogDevelopment.TDS.6.0.0.18\build\HedgehogDevelopment.TDS.targets" Condition="Exists('..\packages\HedgehogDevelopment.TDS.6.0.0.18\build\HedgehogDevelopment.TDS.targets')" />

But I cannot simply remove it; the build on Azure will yell at me. I actually tried it, and it literally did yell at me. So even though the above worked, I cannot go with that. Then something else in the *.csproj file caught my attention. See below:

<Import Project="$(MSBuildExtensionsPath)\HedgehogDevelopment\SitecoreProject\v9.0\HedgehogDevelopment.SitecoreProject.targets" Condition="Exists('$(MSBuildExtensionsPath)\HedgehogDevelopment\SitecoreProject\v9.0\HedgehogDevelopment.SitecoreProject.targets')" />

I went to the local path and could actually see HedgehogDevelopment.SitecoreProject.Tasks.dll located at the path noted in the import. A quick screenshot below:

This gave me a nice clue in the right direction. I quickly gathered my patience and did the following, in order. Do note that you have to start with a standalone TDS project that has no internal references to other projects; those should be saved for last. I first applied my theory to a bundle TDS project and lost hope. But something told me to try again on a simple TDS project.

  1. Installed the NuGet package by running the command below on every TDS project in the solution
    Install-Package HedgehogDevelopment.TDS -Version 6.0.0.18
  2. Removed the existing import that was loading HedgehogDevelopment.SitecoreProject.targets from the location above; it is no longer needed now that all those dlls load from NuGet.
  3. Note that #1 and #2 must be done in that specific order. Otherwise, you will get other pesky errors, as TDS is thrown off if you remove references before adding new ones.
  4. Restarted Visual Studio entirely. Do note that simply reloading the project will not help, and you will hit the same error again and again. In my case, I knew I was on the right path, but it did tire me, since I tried building as soon as I did the steps above.
  5. Rebuilt the solution; this time the build should succeed if all is well.
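To summarize the net effect on each TDS .csproj: only the NuGet-sourced targets import remains, and the machine-local MSBuildExtensionsPath import goes away. The version and paths below are the ones from this post; yours may differ:

```xml
<!-- REMOVE: the machine-local import that caused the double-load error.
<Import Project="$(MSBuildExtensionsPath)\HedgehogDevelopment\SitecoreProject\v9.0\HedgehogDevelopment.SitecoreProject.targets"
        Condition="Exists('$(MSBuildExtensionsPath)\HedgehogDevelopment\SitecoreProject\v9.0\HedgehogDevelopment.SitecoreProject.targets')" />
-->

<!-- KEEP: the import added by the NuGet package install. -->
<Import Project="..\packages\HedgehogDevelopment.TDS.6.0.0.18\build\HedgehogDevelopment.TDS.targets"
        Condition="Exists('..\packages\HedgehogDevelopment.TDS.6.0.0.18\build\HedgehogDevelopment.TDS.targets')" />
```

With only the packages-relative import left, the build agent and your local machine load the TDS tasks from the same place.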

After I checked in the above changes, my Azure pipeline was a happy camper and generated the artifacts I was after. Yayyyyy!! Moving on to the next step; I am sure I will face more roadblocks and will have to tear them apart. But this made me really happy on a Friday!!!!

Hope this helps someone save half a day or more!