Sitecore Session Data Mystery


Hello! Every now and then, a JIRA ticket comes in the form of a learning angel. This bug was exactly that: it needed detective work, history exploration, research and that very special aha moment. I have a love-hate relationship with such pesky bugs. If I am successful, I get the bragging rights; if I am not, I am signing up for sleepless nights. Luckily, this case was a successful one. 🙂

Here it goes. It started with a client email saying they were seeing a drop in specific values of form submission data starting at the end of last year.

First step: Replication -> Nope, could not replicate this loss of data on dev, QA or UAT.

Second step: Data Analysis -> Asked the client to provide the data from the end of last year to now to see if I could find a pattern. Bingo – yes, all the missing data they complained about came from session. Okay, some direction.

Third step: Questions/All sorts of them -> Why are session values missing? Actually, when do session values go missing? Answers below:

  1. The session was never stored properly to begin with, so when we tried to access it, no values were present.
  2. The session was stored properly, but it was empty when accessed. This usually happens when the session has expired.

Fourth step: Debug -> Add tons of logs and see what is going on at every step. The session always loaded fine when the business requirement was met, so reason #1 above could be ruled out. When debugging, most of the time session access was also good, but weirdly it was missing in some cases. Bingo -> replication is now successful. It only took at least 15 trials before I hit gold.

Fifth step: Look deeply at those replication scenarios. I looked plenty, spent almost half a day, had some food to think better. But nope, I was unable to see a reason. Failed!

Sixth step: Logs are your best friend! I took a deep dive into the logs and checked all errors and warnings. Seemed harmless. Oh wait, what is this? A 404 on VisitorIdentificationCSS.aspx? Oh wait, there is one more 404 on VIChecker.aspx. Could this do something to session? I found a Stack Overflow post (unable to find the link now) suggesting the session can expire after about a minute when visitor identification fails, because Sitecore then treats the end user as a bot based on the default configuration here.

Seventh step: Celebrate, but with caution. First things first, I ensured the 404 was taken care of; it was due to a global IIS rewrite rule that was removing the aspx extension on every incoming request. Poor visitor identification! This also raises the question of why Sitecore still uses aspx for internal pieces; we should truly make Sitecore internals MVC style by default, as it seems pretty outdated at the moment. Anyway, once the 404s were fixed and the visitor identification requests were happy, I was confident my theory was correct, but I had to prove it before handing off to QA. I made 20 attempts, roughly one trial per minute, and across those 20 trials I did not see even one missing session value. Hence proved!
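
To double-check the fix without another round of manual trials, you can script a quick probe of the visitor identification endpoints and confirm they no longer return 404. A minimal PowerShell sketch below; the hostname and handler paths are assumptions on my part (use whatever paths showed up as 404s in your own logs).

# Probe the visitor identification handlers and print their HTTP status codes
$site  = "https://<your-cd-host>"
$paths = @("/layouts/System/VisitorIdentificationCSS.aspx", "/layouts/System/VIChecker.aspx")

foreach ($p in $paths) {
    try {
        $status = (Invoke-WebRequest -Uri ($site + $p) -Method Head -UseBasicParsing).StatusCode
    }
    catch {
        # Invoke-WebRequest throws on 4xx/5xx; pull the status code off the exception
        $status = $_.Exception.Response.StatusCode.value__
    }
    Write-Host "$p -> $status"
}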

I was sooo happy that I wanted to open a wine bottle, but it was not dinner time just yet, so I settled for an ice-cold nitro brew from Starbucks. I enjoyed some bragging rights by sharing the hypothesis with the client and the internal architecture team, and whoever pinged me that day or hopped on a call had to hear at least part of the story, lol, poor them. The downside of working from home is not being able to share with someone in person. I miss people. 🙁

Experience profile search not working

Do you remember an issue that took almost a month to resolve, one that sat in the back of your head bothering you and reminding you that it was still pending? This case was one such incident.

Here goes the whole deal! I appreciate the Sitecore support team's resilience and how proactive they were in responding. But I also feel this should be documented, because anyone who has used xDB before knows there can be challenges, and they have to be solved quickly to mitigate data latency issues.

Problem: Experience Profile search was not working for older Contacts because PII indexing was not enabled at first and was only enabled later.

Story: Human error happens, even after you have painfully documented every single step that needs to be done manually after a deployment. Unfortunately, config edits on the Processing server or Search Indexer side of things are manual deployments on our end at this time; anything we push to CM/CD roles is an absolutely one-click deployment using patches as a standard. So I missed doing the step noted here during the production deployment. I realized the problem noted above could be because of that, changed the setting to 'true' on the applicable roles/locations on Managed Cloud and hoped for the best. When I did this, search did start to work, but only for newer Contacts added to xDB after the change. Why?

Resolution: I had run into this before when I first started playing with xDB, Experience Profile and various other xConnect-related services. I was pretty confident I needed to rebuild the xDB index, which is usually the answer for such funky behaviors. I did that with the steps below:

  • Open up the Azure portal and open the resource group of concern
  • Go to the specific xc-search App Service resource
  • Open up Advanced Tools and hit Go -> this will take you to Kudu
  • Select the PowerShell option -> drill down all the way to the Index Worker location and follow the steps noted here
  • It said the rebuild succeeded, but nope, the issue was not resolved and search still did not work on older contacts.

I was not sure what else to do, so I logged a support ticket with Sitecore and explained what I had tried so far. They confirmed that a rebuild is what I should be doing to resolve the issue at hand. Strange, but I had done exactly what is written in the documentation. It turns out the rebuild has to be run subtly differently, and from a different location, when Managed Cloud is involved. Support suggested that I run the rebuild command from the location below:

D:\local\Temp\jobs\continuous\IndexWorker\<Random Web Job Name>

The command is also subtly different from the documentation; it should be .\Sitecore.XConnectSearchIndexer.exe -rr
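
Putting the location and the command together, this is roughly what it looks like from the Kudu PowerShell console of the xc-search App Service. A minimal sketch; the web job folder name is randomized, so resolve it rather than typing it out.

# Locate the randomly named IndexWorker web job folder and move into it
$jobRoot = Get-ChildItem "D:\local\Temp\jobs\continuous\IndexWorker" | Select-Object -First 1
Set-Location $jobRoot.FullName

# Kick off the full xDB index rebuild (the -rr switch support suggested)
.\Sitecore.XConnectSearchIndexer.exe -rr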

And then I started patiently monitoring the rebuild process using the instructions here: https://doc.sitecore.com/developers/100/sitecore-experience-platform/en/monitor-the-index-rebuild-process.html

You can also monitor the same from Solr by following the steps below, which were shared by Sitecore support (a scripted version of the same check follows the list):

  • Go to the admin panel
  • Switch to the xdb collection and click “query”
  • Then run the query: id:”xdb-rebuild-status”
  • This will tell us the exact current rebuild status of your xdb index.
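
If you prefer to script that check instead of clicking through the Solr admin UI, the same query can be run against Solr's select endpoint. A sketch below, assuming Solr is reachable over HTTP from wherever you run this; the host and collection names are placeholders.

# Query the xdb collection for the rebuild status document
$solr = "https://<your-solr-host>:8983/solr"
$core = "<your-xdb-collection>"

$resp = Invoke-RestMethod -Uri "$solr/$core/select?q=id:%22xdb-rebuild-status%22&wt=json"
$resp.response.docs   # the document(s) describing the current rebuild status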

Yep, I did all of that, and every single time I tried, it got stuck at 95% in the finishing state; it never completed. So Sitecore support asked me to do a lot of debugging steps to enhance the logging and write more detail to the index worker logs, to help us understand why it was stuck. They identified the issue to be a setting that was higher than the default. The funny thing is, we did not set these settings up; they came with Managed Cloud. Anyway, below is the setting I had to swap.

Go to the location below on Kudu: "D:\home\site\wwwroot\App_Data\jobs\continuous\IndexWorker\App_Data\config\sitecore\SearchIndexer\sc.Xdb.Collection.IndexWriter.SOLR.xml"

I could see the config below:
<Solr.SolrWriterSettings>
  <Type>Sitecore.Xdb.Collection.Search.Solr.SolrWriterSettings, Sitecore.Xdb.Collection.Search.Solr</Type>
  <LifeTime>Singleton</LifeTime>
  <Options>
    <ConnectionStringName>solrCore</ConnectionStringName>
    <RequireHttps>true</RequireHttps>
    <MaximumUpdateBatchSize>1000</MaximumUpdateBatchSize>
    <MaximumDeleteBatchSize>1000</MaximumDeleteBatchSize>
    <MaximumCommitMilliseconds>600000</MaximumCommitMilliseconds>
    <ParallelizationDegree>4</ParallelizationDegree>
    <MaximumRetryDelayMilliseconds>5000</MaximumRetryDelayMilliseconds>
    <RetryCount>5</RetryCount>
    <Encoding>utf-8</Encoding>
  </Options>
</Solr.SolrWriterSettings>

The "MaximumCommitMilliseconds" value was too high, and it was recommended to change it to "1000", which is apparently the default value. My suspicion is that when the Managed Cloud team followed best practices per a KB article, they swapped it as part of their default setup steps.

I made the above change, queued the rebuild process again for the nth time and monitored patiently. It worked!!! I almost cried out loud, given how many times I had done these steps and how many days I had waited to get the issue resolved.

Hoping the Sitecore team can update the documentation around this specifically for Managed Cloud. Until then, hope this helps someone else losing sleep over a similar problem who needs to queue that index rebuild.

Know your responsibility with Sitecore Managed Cloud

I have worked with several different Sitecore implementations, and each one is unique in its own way. Teams are different, responsibilities are different, people – definitely different 😉 Managed Cloud is also subtly different when it comes to a few mundane chores that you may have done differently in the past. With Managed Cloud, it is a partnership between the client, the implementation partner and, yes, the Sitecore Managed Cloud team. So we have to keep each other posted; it is definitely challenging and needs to be well planned out, especially on tight timelines like ours.

I must have logged around 50 tickets at my last count, combining traditional Sitecore tickets and Managed Cloud tickets. In retrospect, though it feels like I spammed Sitecore, some of the questions were purely about understanding responsibility and where to draw the lines between all the teams involved. The documentation does not call this out, unfortunately! Also, the typical turnaround time noted on these types of support tickets is 3 days, which is a big chunk of a one-sprint go-live project like ours. lol So my approach was to log the ticket and keep working on the side to figure things out while we waited for the Sitecore team to respond.

I will start with the easy ones first. As we built more confidence from a code and content perspective, especially when we were ready to hand off our features for content load, it was super important to ensure backups were turned on and configured per our requirements in terms of backup frequency and retention for the environment in question. These are handled by the Sitecore Managed Cloud team via service cloud requests.

Backup Schedule for Azure App Services

Use this request to back up your App Services: basically code/configs and pretty much the file system. On production, for example, I requested daily backup frequency and one week of retention. You can relax the requirement for environments that are less of a concern.

Backup Schedule for Azure SQL databases

Use this request to back up your databases, meaning your content and pretty much everything in Sitecore. For our production instance, I requested backups for core, master and web with daily frequency and 4 weeks of retention. Usually, that kind of retention is enough for most solutions.

Quick Tip: As a perfectionist (at least I would like to call myself that lol), I wanted to ensure all health checks noted here pass on all my CM and CD (web role) instances per the documentation out there, but it turns out it is okay for /healthz/ready to fail and it does not mean anything is wrong. So if you see your instance giving an "Unhealthy" status on that URL, do not fret; you did nothing wrong as long as /healthz/live is good on CM and CD. It turns out the /healthz/ready check is more for other roles such as Processing or Reporting. The documentation could be out of date, but I hope this saves you some time.
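
If you want to check this yourself rather than wait for the platform to flag it, the probes are plain HTTP endpoints. A quick PowerShell sketch; the hostnames are placeholders.

# Print the status of the live/ready health checks on CM and CD
$hosts = @("https://<your-cm-host>", "https://<your-cd-host>")

foreach ($h in $hosts) {
    foreach ($probe in "/healthz/live", "/healthz/ready") {
        try   { $status = (Invoke-WebRequest -Uri ($h + $probe) -UseBasicParsing).StatusCode }
        catch { $status = $_.Exception.Response.StatusCode.value__ }
        Write-Host "$h$probe -> $status"
    }
}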

Need a custom domain to ensure your URLs on lower environments and production are end-user friendly and not the hash-looking ones that come with the initial setup? The Sitecore Managed Cloud team does not own this; it is you, and probably the client team, depending on the specific scenario. The links below might help:

https://docs.microsoft.com/en-us/azure/static-web-apps/custom-domain

https://docs.microsoft.com/en-us/azure/app-service/configure-ssl-bindings

SOLR

I cannot answer this one with confidence, and the Sitecore team said different things about it on different tickets. One thing I clearly remember: while debugging an index-related issue, I was very tempted to kick Solr and restart it. Typically, if your team owns the infrastructure, you can log in over PuTTY or PowerShell and run some quick commands. Luckily the issue got resolved and I did not need to do this, but the Sitecore team did mention I would have to create a ticket if I needed to. Also, the Sitecore team sets Solr up when they create a brand-new resource group per the contract, so I guess it is mostly Sitecore that owns Solr. The only access we had was the Solr admin console, and to be honest, there are so many things you cannot do from the console. You need more command-line power when it comes to Solr.

Security Hardening, Session Setup and Tuning

Wondering who does the whole nine yards (performance tuning, hardening, session state, etc.)? I did too. When I reached out to Sitecore, they said it was going to be me. I was bummed; why doesn't this come already set up was my first thought.

But since I got the confirmation, I started at full speed, only to find out the hard way that session setup was already done by the Sitecore team lol. Just so you do not waste your time working out the right decisions for out-of-proc session and so on: by default, CD roles are set to use out-of-proc session state with Redis as the provider. So no action was needed from me for session. Crossed that off my list.

Now, coming to security hardening, I noted that some items were already in place, done by Managed Cloud. However, most were not, so I encourage the regular drill of going over the Experience Platform security chores we typically do, noted here. I have some detailed notes with me if someone is curious; reach out!

Almost forgot: for tuning, none of the old rules apply to Azure SQL. So it is important to give the links below a read and do what you can:

https://doc.sitecore.com/developers/100/sitecore-experience-manager/en/monitoring-and-maintaining-azure-sql.html

https://kb.sitecore.net/articles/909298

I could actually do a bunch of what is mentioned in the links above, and some of it, again, was already done. 🙂

I also remember having to follow this one, https://kb.sitecore.net/articles/858026, to overcome a pesky error that kept creeping into my logs. After I changed the value to the suggested one, the error went away.

Whitelist Authoring

This is actually related to some of the steps we need to do for security hardening when authoring is exposed publicly. If authoring is not exposed, you can relax some of those requirements; then again, Sitecore recommends doing those steps regardless, so I do not want to give an incorrect recommendation. Letting only a finite set of people reach authoring is much more secure anyway. To do this, I read the documentation out there but was mostly confused; the Sitecore Cloud team was sweet enough to actually do this chore for us. Once they did it, I realized it was not as complicated as it sounded in the documentation. Basically, you run a PowerShell script (it is in the documentation) to fetch what authoring needs to talk to and whitelist all of those as outbound addresses on the CM App Service. Plus, they added the outbound IP addresses of reporting to the SQL Server firewall. I would still think it is smart to ask the Sitecore team to do this. You are good with this setup until your App Service is scaled up or down or you change the pricing tier, at which point the outbound IP addresses can change. So far, it has been a month and we see no issues with the initial setup.

Another important thing to note: we also had to add the authoring IPs to the outbound address list on the CM App Service, because we noticed that a request back to authoring from the authoring server itself was being blocked with a Forbidden message lol. To avoid this, I had to add a couple more outbound IP addresses to the existing list, which was not fun because I had to add a bunch more manually.
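
If you need to pull that list of outbound addresses yourself, the Az PowerShell module exposes it on the App Service. A sketch below; the resource group and app names are placeholders, and note that the "possible" list is the superset Azure may switch to after a scale or tier change.

# List current and possible outbound IPs of the CM App Service
$cm = Get-AzWebApp -ResourceGroupName "<your-resource-group>" -Name "<your-cm-app-service>"

$cm.OutboundIpAddresses -split ","           # IPs in use right now
$cm.PossibleOutboundIpAddresses -split ","   # full set Azure may use after scaling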

Debugging Tricks

This is the last section of this blog, but definitely the most important one. Without this knowledge, I felt crippled. It was super hard to keep track of logs using Kudu; timestamps are not right and you never know if you are looking at the correct log file. So I recommend, even if it takes a couple of extra minutes, always reviewing the latest logs by following the steps here.

I was so much more confident debugging issues once I had a handle on the latest logs. Also, make sure your app settings are correct so Application Insights always receives the logs properly.

Alright, this is it for now! Next up, I will sum up how Sitecore helped me solve an issue with the index worker. It took us a good month or so, but hey, it is now fixed. That one is also very much related to Managed Cloud, but it deserves a special blog entry. 🙂

Get your work onto Managed Cloud


If you have not checked out the beginning of my journey with Sitecore Managed Cloud, please start there to truly understand the basics. Now, if you already know your way around Azure and your team has started the real work, you must be really itchy to get some code up to your Azure App Services. Let's roll up our sleeves and do just that.

First and foremost, understand the app settings. With Managed Cloud it is even more important to ensure your settings patch config has the right, expected values, so it does not throw off things you would hardly imagine. Each one has significance and should be paid attention to. For instance, I had an incorrect value for 'role:define' on my CM and it threw off the indexing strategies; it was painful to debug. So do pay attention to all the values. I am putting down some samples and guidelines, but it all depends on what your instance needs to breathe and do, so do it your right way.

  • role:define – Any of the below, depending on the role of the server: ContentDelivery, ContentManagement, "ContentDelivery, Indexing", "ContentManagement, Indexing", Processing, Reporting, Standalone. Note – though ContentManagement is a valid value, depending on your infrastructure setup you may want to choose "ContentManagement, Indexing" instead.
  • env:define – This is up to you really, depending on how you use your patch files. In my case, I like {envname}{rolename}, for example PRODCD, UATCM, etc.
  • search:define – Either Solr or Azure. In my case it was Solr.
  • messagingTransport:define – AzureServiceBus. Note – I took a peek at what Sitecore set up for us on Managed Cloud and ensured we used the same. In UAT, I saw some issues with Marketing Automation because I had this value set incorrectly.
  • AllowInvalidClientCertificates – You would think this should be false, but for some reason that threw off a lot of things, and the Sitecore Managed Cloud team had set it up to be true, so I left it as 'True'.
  • useApplicationInsights:define – True. Without this, logging will not work properly. As you will realize, on Managed Cloud all logs tie back to Application Insights, so it is super duper important to have this flag set correctly.
  • exmEnabled:define – no. Turn this off if you are not using it; you can always turn it back on when you choose to use EXM.
  • aspnet:RequestQueueLimitPerSession – 25. This value may need to be tuned if you constantly see errors in your logs. There is a good article on this.

Important application settings to pay attention to
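
If you want to double-check what is actually configured on a running App Service, the Az PowerShell module can list its application settings. This is only a sketch with placeholder names, and in our setup most of these values live in web.config patch files rather than App Service application settings, so treat it as just one place to look.

# Dump the App Service application settings that relate to Sitecore role/search/messaging
(Get-AzWebApp -ResourceGroupName "<your-resource-group>" -Name "<your-cm-app-service>").SiteConfig.AppSettings |
    Where-Object { $_.Name -match "role:define|search:define|messagingTransport:define" } |
    Format-Table Name, Value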

The next step is to set up TDS Web Deploy to ship your code and items to your shiny new Sitecore Managed Cloud instances. You can set this up to be manual or to trigger automagically when someone merges to a specific branch on Azure DevOps. In our case, we had a lot of hands in the solution at any given point in time due to the super tight timeline, so we kept it manual to ensure we could make choices depending on the impact. Below are two links that really helped me get through this:

Step 1: Set up the TDS project for Web Deploy. Follow the steps here; the last step is optional and I do not remember doing that bit. You are good to move on to step #2 as long as you see a Web Deploy package and a PS script in your TDS project's bin folder. https://www.teamdevelopmentforsitecore.com/Blog/TDS-5-8-Local-Set-Up

Quick screenshot of where the web deploy output would go on building your solution

Step 2 – Set up and configure the Azure build pipeline and release pipeline. The steps noted here are pretty thorough to get you started:

https://www.teamdevelopmentforsitecore.com/Blog/TDS-5-8-Configuring-Azure-Build-Pipeline
https://www.teamdevelopmentforsitecore.com/Blog/TDS-5-8-Configuring-Release-Pipelines

The only catch is that the release pipeline setup needs an Azure subscription connection. This step is a must to connect to the App Service of concern and actually deploy stuff. To get it, you have to do a couple of things in order.

Log a cloud support request titled "Get Access to Azure – Service Principal". Give details about your resource group; this type of ticket is usually completed within minutes, and you will get information such as below on your ticket.

Note this and keep this handy for next step

Go to your Azure DevOps project and click on Project Settings at the bottom left. It is hard to find, so I am adding a screenshot.

Project Settings

Next, click on Service Connections in the left menu; you should see an option to add a new one. Select Azure Resource Manager, pick Service Principal (manual), and load it up with the proper values from the support ticket resolution comment. Below is a quick mapping that could help you.

Each Azure service principal field, the value or Sitecore support ticket label it maps to, and notes:

  • Environment – Azure Cloud (should be picked by default)
  • Scope Level – Subscription (should be picked by default)
  • Subscription Id – Subscription Id; you can also get this from the Publish Settings file (Subscription Id)
  • Subscription Name – see notes; you can also get this from the Publish Settings file (Subscription Name)
  • Service Principal Id – Application (client) Id; this will be on the ticket, get it from the label titled as noted
  • Credential – Service principal key; should be the left option and the default
  • Service Principal Key – Service Principal Password/API Key; this will be on the ticket, get it from the label titled as noted
  • Tenant ID – Tenant ID; this will be on the ticket, get it from the label titled as noted

Give it a meaningful service connection name; I like {project name – instance type – last 5 characters of the resource group}.

Refresh the Azure subscription section on the release pipeline configuration and you should now see the new option. Select it, and you should then see the familiar App Services of your resource group under the App Service Name field. Select that as well and you are pretty much golden. Follow the rest of the steps in the links noted above and test your build and release pipelines.

If you were lucky like me and did all your steps right, you should now see your sweat and blood on a Sitecore Managed Cloud instance.

App service name and Azure subscription settings on Release configuration

Next up, we will look at some ways to protect your code and items, the boundaries between what you are responsible for and what Sitecore is responsible for, and debugging tricks. Stay tuned!

Sitecore Managed Cloud – The Beginning


Towards the end of last year, I got an opportunity to lead a project that had to go live in a sprint. Basically, a sprint to architect, develop, test and go live. It was something I thought was impossible even for an MVP version. But I took it as a challenge and wanted to see how far I could go alongside my team. We had a real steep learning curve here; below are some of the new things, and of course Managed Cloud stood out because this was the first time I was owning the implementation and deployment on such infrastructure.

  • Sitecore 10
  • Brand new Zodiac/Constellation foundation solution
  • A development team new to the experience-first paradigm
  • Sitecore Managed Cloud
  • Many, many more first-time things from the front-end development team as well

When I am given something new to own, learn and deliver, my first thought is to understand the basics, no matter how many hours I have to spend on it. Without this, I have found myself struggling in the past, unable to break walls and move/steer the team and myself with confidence. So I never do it any other way now: I started there and, slowly but surely, I could see myself gaining momentum.

First things First….

Understanding the terminology and mapping it backwards to knowledge from previous experiences. Below are the steps and notes on every first step I took to get us to where we stand right now. They should really help if this is your first time, like it was for me, and you do not have a direct mentor you can reach internally at your company.

Get in there!

To get access to the Azure portal, you first need access to the "Create Service Requests" section of the Sitecore support portal. Reach out to your direct Sitecore contact and ask them to get you that. Once you have it, you should see the section below in your Sitecore support login. More details on the process are in this article.

second section that will appear once you have permission to the cloud organization

You should now be able to create a service request to get access to the resource groups available on the account. You can read more details about this request in Sitecore's KB article. This ticket is processed almost instantly, and if you then log in to the Azure portal, you should see Sitecore in your subscriptions.

Sitecore subscription now visible on Azure login

Azure Resource Groups

Alright, you are in; the first step is sorted. Now what? If, like me, this is your first time inside PaaS-based Azure, you are probably lost. There is so much documentation out there that sometimes too much of it can be confusing. So here is what you need to understand: every environment created on your subscription, typically one each for QA, staging/UAT and production, is a 'Resource Group'. If you have sufficient permissions, you can actually create a service request to set up a new resource group for your needs; this request is called 'Create New Environment'. Of course, there could be more created, or needing to be created, based on your contract with Sitecore. We had three of them for this specific client. You can access resource groups from your Azure portal by clicking below.

Ensure you have selected Sitecore subscription before clicking this.
Three resource groups one for each environment on contract

Understand what you see

Now, every resource group has several resources of several types in it. The first time I saw this, I thought: what are all these? I had no idea, and I really wanted to understand them before going any further. So I created a mapping of every resource to what it means and what it is responsible for.

view of resource group resources. As you can see there would be around 44 of them of different types

I like to build on what I am familiar with, so I started with an article that explains the different roles in the Sitecore architecture. Pay real attention to 'scaled XP'. Then all I had to do was map everything I saw in the resources back to that diagram; a visual always helps! This understanding is really important to ensure you know what to expect, how to debug and beyond. I took the pain of actually mapping each resource and adding a line or two about what it means. I am sharing it in hopes of helping someone else. Check it out here.

Next up is to know how to deploy your code and items.

Deploy your custom facet on Sitecore Managed Cloud

We have worked extensively over the past couple of projects on how to extend an xConnect facet, add fun properties to it as needed and tuck it away nicely for later use at all touch points. By now, the whole process has become second nature. I still remember my first attempt, where I missed various things and wasted more than half a day. 🙂 I do not make rookie mistakes any more. lol

Traditionally, if you need an xConnect facet, you do the most common steps on your local or a simple upper environment. This is well documented everywhere, so I will not repeat what that process is. Here is one of my blog posts you can refer back to.

Now, what if your upper environment is Sitecore Managed Cloud? The steps are going to be different in that case, so it is important to know the drill:

This link (Sitecore documentation) helped me quite a bit. I recommend referring to it first and then following the steps below as an additional reference to get you rolling with custom xConnect data.

Steps to actually get it to work:

  1. Deploy your JSON model to all xConnect Collection roles in your resource group.
  2. Also ensure you deploy your JSON to the Index Worker web job, which is usually under the xc-search App Service role in your resource group.
  3. Drop the JSON on the Marketing Ops App Service as well.
  4. Ensure the dll that defines your custom facet, along with the config patch, is deployed to all core roles: the CM, CD and Processing App Services.
  5. Restart all the xc roles where you dropped the JSON, and also all core roles, to ensure they are reading from all the right places.
  6. I would also recommend stopping and starting the web job if you are not confident it restarted when you restarted the App Service (xc-search); see the sketch below.
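
If you would rather script those restarts than click through the portal, below is a minimal sketch using the Az PowerShell module and the Kudu REST API. Everything in angle brackets is a placeholder, the web job name is an assumption on my part, and it presumes basic publishing credentials are still enabled for the SCM site.

# Restart the App Service that hosts the IndexWorker web job
$rg  = "<your-resource-group>"
$app = "<your-xc-search-app-service>"
Restart-AzWebApp -ResourceGroupName $rg -Name $app

# Fetch the publishing credentials so we can talk to the Kudu API
$creds = Invoke-AzResourceAction -ResourceGroupName $rg `
    -ResourceType "Microsoft.Web/sites/config" -ResourceName "$app/publishingcredentials" `
    -Action list -ApiVersion "2021-02-01" -Force
$pair = "$($creds.Properties.publishingUserName):$($creds.Properties.publishingPassword)"
$auth = "Basic " + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes($pair))

# Stop and start the continuous web job so it definitely picks up the new JSON model
$job = "IndexWorker"   # assumed web job name; check Kudu for the real one
Invoke-RestMethod -Method Post -Headers @{ Authorization = $auth } `
    -Uri "https://$app.scm.azurewebsites.net/api/continuouswebjobs/$job/stop"
Invoke-RestMethod -Method Post -Headers @{ Authorization = $auth } `
    -Uri "https://$app.scm.azurewebsites.net/api/continuouswebjobs/$job/start"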

And that is it; it should all work fine now. You should be able to ship information to the extended xConnect facet fields as needed. Hope this helps someone who is unsure how to deploy a custom extended facet to Managed Cloud!

Unveiled a Sitecore Connect for Salesforce CRM bug


It is always a great feeling when you find out you did not mess up or cause an issue yourself; it came parceled along with the installation. Cannot quite explain it, but hey, still, it was not me. 🙂

I bumped into this issue while doing super thorough tests on the new fields I had just added to my facet, working to push that additional information, when available, all the way through to Salesforce CRM and to SFMC. While running these tests, I wanted to quickly ensure a Salesforce contact was flagged as "Updated", so the next connector run would try to update the record in Sitecore. To flag it, I cleared a field that had data in it.

The connector ran as expected and picked up the contact I had just edited, but I did not see the field get updated in xConnect. It still had the old CRM value and was not updated to null/empty, which is super weird and not expected. At the end of the day, the role of the connector is to keep data in sync between platforms, so empty data should map to empty data, not to outdated values from CRM. To me this is a data integrity issue if keeping contacts truly in sync matters to clients. So I logged a support ticket just to confirm whether this is current connector behavior or whether I made a grave mistake somewhere in my setup, configuration or mapping.

Today, Sitecore support confirmed that this is indeed a bug and created one in their system. I do not know why, but I feel like I accomplished something by uncovering this pesky data issue. lol Here is the ticket number, 441899, and I am all excited to track this, probably over the next couple of releases – KB853187 and KB951718.

Handling different types to map to Salesforce


I recently had a requirement, building on earlier work, to store a couple of additional fields on my xConnect facet. I also had to show them on the Experience Profile and push this data, when available, from Sitecore to Salesforce and back. The additional information should also be available in SFMC when present. I knew the drill and did what was needed to make it happen. String field mapping is pretty straightforward and I had done it a couple of times before, so I will not focus on that; have a look at my other blog posts, which cover it.

So, with this requirement we had different data types to map: Boolean, DateTime, Currency and Number, to name a few. Each of these types needs different handling to ensure the data maps to Salesforce and back successfully. Let's see how it's done.

Step 1: Extend your Facet with all the additional information

        public bool HasDonated { get; set; }
        public DateTime? MostRecentGiveLivelDonationDate { get; set; }
        public DateTime? MostRecentVoicifyDonationDate { get; set; }
        public int EngagementScore { get; set; }
        public decimal GiveLivelyDonationAmount { get; set; }
        public decimal VoicifyDonationAmount { get; set; }

As you can see, we have bool, DateTime (ensure this is defined as nullable; if not, you will be in deep shit and the mappings won't work to CRM and SFMC), int and decimal. Next up, let's see how the mappings should be handled differently and where special transformers might be needed, case by case. Do note this also depends on the data direction, that is, whether the mapping is from Sitecore to Salesforce or from Salesforce to Sitecore. I will not cover the steps needed to make these new fields available by updating the custom collection model JSON and items; there is a whole different blog post for that.

Boolean Mapping:

This is the easiest one. 🙂 No special handling is needed; it simply works with the same process we followed before here.

DateTime Mapping:

I would like to stress again how important it is to define this property as nullable. See the code snippet above and follow it. Do not waste hours of debugging like I did. 🙂

From Salesforce to Sitecore – see highlighted; it is available OOTB, just add it so the mapping works
From Sitecore to Salesforce – see highlighted; it is available OOTB, just add it so the mapping works

Int mapping:

From Salesforce to Sitecore – see highlighted; this is a new value reader I had to add so that the mapping works and data flows properly.
Definition of the value reader used as the transformer in the step above. It is very important that the Target Type entered matches the .NET type exactly in the fully qualified format shown; if you give just Int or Int32, it will not work.

Note: No special handling is needed to push this type from Sitecore to Salesforce; it just works fine.

Decimal Mapping:

Very similar to the int mapping noted above. I will add a few screenshots for reference.

Note: No special handling is needed to push this type from Sitecore to Salesforce; it just works fine.

Now, this might seem pretty straightforward, but it took me hours to figure out why something was not working as expected, because there are multiple points of failure and I could not put a finger on why. When mappings fail, the logs are not very helpful, so it took a lot of trial and error to come up with the solution for each of these use cases. Hope this helps someone doing gymnastics with Salesforce and SFMC.

Build Error on TDS Squashed


Remember the feeling when you know you have seen a similar error before but cannot recollect what you did to resolve it? I spent more than half my day battling something very similar to what I had seen before, and no matter what I did from recollection, it did not work. Let me take you through what happened.

Alright, I was trying to set up an Azure pipeline to use TDS Web Deploy for the very first time. With first-time things, I am usually extra cautious, skeptical and maybe not as confident as normal. To top that off, when I struggle to solve something very familiar, it does something to me; I feel like *****. I was burnt out by the end of it, but the good news is that this issue is squashed and I am on to the next step.

Okay, here is the story! I was being a good girl and following the steps noted here to get TDS Web Deploy done right via Azure DevOps, building and generating artifacts from the solution hosted on Azure DevOps. I thought I was doing well until the install of the Hedgehog TDS Development NuGet package. I opened up the Package Manager Console, selected nuget.org as the source and then selected the project on which I needed to install the package. It did its thing and seemed alright. For my sanity, I wanted to rebuild the project I had changed, and boom, I got a huuuuuge error message, one you might have seen at some point working with TDS. Full error block below:

The "VerifyTDSVersion" task failed unexpectedly.
System.IO.FileLoadException: API restriction: The assembly 'file:///C:\Users\deepthi.katta\source\repos\project\packages\HedgehogDevelopment.TDS.6.0.0.18\build\HedgehogDevelopment.SitecoreProject.Tasks.dll' has already loaded from a different location. It cannot be loaded from a new location within the same appdomain.
   at System.Reflection.RuntimeAssembly._nLoad(AssemblyName fileName, String codeBase, Evidence assemblySecurity, RuntimeAssembly locationHint, StackCrawlMark& stackMark, IntPtr pPrivHostBinder, Boolean throwOnFileNotFound, Boolean forIntrospection, Boolean suppressSecurityChecks)
   at System.Reflection.RuntimeAssembly.nLoad(AssemblyName fileName, String codeBase, Evidence assemblySecurity, RuntimeAssembly locationHint, StackCrawlMark& stackMark, IntPtr pPrivHostBinder, Boolean throwOnFileNotFound, Boolean forIntrospection, Boolean suppressSecurityChecks)
   at System.Reflection.RuntimeAssembly.InternalLoadAssemblyName(AssemblyName assemblyRef, Evidence assemblySecurity, RuntimeAssembly reqAssembly, StackCrawlMark& stackMark, IntPtr pPrivHostBinder, Boolean throwOnFileNotFound, Boolean forIntrospection, Boolean suppressSecurityChecks)
   at System.Reflection.RuntimeAssembly.InternalLoadFrom(String assemblyFile, Evidence securityEvidence, Byte[] hashValue, AssemblyHashAlgorithm hashAlgorithm, Boolean forIntrospection, Boolean suppressSecurityChecks, StackCrawlMark& stackMark)
   at System.Reflection.Assembly.ReflectionOnlyLoadFrom(String assemblyFile)
   at HedgehogDevelopment.SitecoreProject.Tasks.VerifyTDSVersion.Execute()
   at Microsoft.Build.BackEnd.TaskExecutionHost.Microsoft.Build.BackEnd.ITaskExecutionHost.Execute()
   at Microsoft.Build.BackEnd.TaskBuilder.<ExecuteInstantiatedTask>d__26.MoveNext()	Website.Deploy.TDS	C:\Users\deepthi.katta\source\repos\project\packages\HedgehogDevelopment.TDS.6.0.0.18\build\HedgehogDevelopment.TDS.targets	187	

I can bet my life I have seen this before. So I tried recollecting what I did and also looked online to find out how to remove this error:

  1. Reloaded the project of concern after unloading it.
  2. Reloaded the whole Visual Studio, thinking that would do the magic.
  3. Ensured I matched the NuGet package version to the TDS version I have on my local. I faintly remembered that conflicting versions could also cause a similar error.
  4. Removed the new line that got added to the project file after I installed the package (shown below). I know I should not, but I wanted to try it. This worked!!
<Import Project="..\packages\HedgehogDevelopment.TDS.6.0.0.18\build\HedgehogDevelopment.TDS.targets" Condition="Exists('..\packages\HedgehogDevelopment.TDS.6.0.0.18\build\HedgehogDevelopment.TDS.targets')" />

But I cannot remove it; the build on Azure will yell at me. I actually tried it and it literally did yell at me. So even though the above worked, I could not go with it. Then something else in the *.csproj file caught my attention. See below:

<Import Project="$(MSBuildExtensionsPath)\HedgehogDevelopment\SitecoreProject\v9.0\HedgehogDevelopment.SitecoreProject.targets" Condition="Exists('$(MSBuildExtensionsPath)\HedgehogDevelopment\SitecoreProject\v9.0\HedgehogDevelopment.SitecoreProject.targets')" />

I went to the local path and could actually see HedgehogDevelopment.SitecoreProject.Tasks.dll located at the path noted in the import. A quick screenshot below:

This gave me a nice clue in the right direction. I quickly gathered my patience and did the following, in order. Do note that you have to start with a standalone TDS project that has no references to other TDS projects; the bundle project should be saved for last. I applied my theory on the bundle TDS project first and lost hope, but something told me to try it again on a simple TDS project.

  1. Installed the NuGet package by running the command below on all TDS projects in the solution
    Install-Package HedgehogDevelopment.TDS -Version 6.0.0.18
  2. Removed the existing import that was loading HedgehogDevelopment.SitecoreProject.targets from the location above; I do not need it now that all those dlls load from the NuGet package.
  3. Note that #1 and #2 need to be done in that specific order. Otherwise, you will get other pesky errors, as TDS is thrown off if you remove references before adding new ones.
  4. Reload the whole Visual Studio. Do note that simply reloading the project will not help and you will see the same error again and again. In my case, I knew I was on the right path, but it did tire me since I tried building as soon as I did the above steps.
  5. Rebuild the solution, and this time you should have a successful build if all is well.

After I checked in the above changes, my Azure pipeline was a happy camper and was able to generate the artifacts I was after. Yayyyyy!! Moving on to the next step; I am sure I will face more roadblocks and have to tear them apart. But this made me real happy on a Friday!!!!

Hope this helps someone save half a day or more!

Data Destination Not Impossible with Sitecore Connect for Salesforce CRM


I took my first steps back in February this year with a mission to connect two amazing systems, Sitecore and Salesforce. Little did I know back then how much I would learn, how many walls I would get to break, and that I would reach the final (for now) data destination we had in mind. It was a fun ride! Not only did I learn about Sitecore Connect for Salesforce CRM and the scope of what we can do with it, but I also learned a ton of other things: Sitecore xConnect, Analytics, xDB Solr indexes, custom contact facets, Sitecore Forms, custom collections and models, Salesforce leads, objects and fields, Data Exchange Framework, custom readers and converters, pipeline batches and scheduled tasks. I mean, I could go on and on. It was awesome and yet quite painful when things would not work. But hey, I am here writing this last post of the series, covering how and why I did what I did. I made every effort to reference the blogs/links that inspired me and helped me solve a problem at hand, but I also managed to figure out some kinks myself, and I documented those to help someone else who is in the midst of implementing their own data journey between the two systems.

Below are links to all of my blog posts that are part of this series, in hopes they help anyone facing similar challenges or anyone who has just kicked off their journey stitching together two of the most powerful platforms out there, Sitecore and Salesforce.

Good Luck!

I was also able to connect with Salesforce Marketing Cloud and do some extensions on that side as well. I will talk about that in my next blog post. Until then, take it easy. 🙂

Solve hiccups with Sitecore Connect for Salesforce CRM

We managed to ship the whole data flow described so far in my blog posts about this journey. When you deploy a new system, though, the journey does not end there. Even if you did your best while implementing, testing and deploying, there can still be unknowns that you need to keep an eye out for and solve to ensure the data continues to flow smoothly between the systems. We had a couple of such issues to battle. The gist of the problems and corresponding solutions is below:

  • Problem – ‘Contacts stopped syncing to Salesforce’: One not-so-fine day, the connector broke and contacts did not sync to Salesforce.
  • Debugging & Solution
    This needed a lot of debugging, as the logs were not helpful. Below is the message from the log of the pipeline batch responsible for syncing contacts from Sitecore to Salesforce. Let's look at the error:
ManagedPoolThread #4 20:05:04 DEBUG [Data Exchange] at System.Threading.Tasks.Task`1.GetResultCore(Boolean waitCompletionNotification)
at Sitecore.DataExchange.Providers.Salesforce.Queues.BaseSubmitQueuePipelineProcessor.SubmitBatch(PipelineStep pipelineStep, PipelineContext pipelineContext, OperationType operationType, List`1 inputList, ILogger logger)
ManagedPoolThread #4 20:05:04 ERROR [Data Exchange] InvalidBatch
ManagedPoolThread #4 20:05:04 DEBUG [Data Exchange] at Salesforce.Common.XmlHttpClient.<HttpGetAsync>d__4`1.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Runtime.CompilerServices.TaskAwaiter`1.GetResult()
at Salesforce.Common.XmlHttpClient.<HttpGetAsync>d__0`1.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Runtime.CompilerServices.ConfiguredTaskAwaitable`1.ConfiguredTaskAwaiter.GetResult()
at Salesforce.Force.ForceClient.<GetBatchResultAsync>d__52.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Runtime.CompilerServices.TaskAwaiter`1.GetResult()
at Salesforce.Force.ForceClient.<GetBatchResultAsync>d__4f.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Runtime.CompilerServices.TaskAwaiter`1.GetResult()
at Salesforce.Force.ForceClient.<RunJobAndPollAsync>d__1e`1.MoveNext()

First look at the error, second look at the error, I mean I looked at it like two hundred times, and I had no clue why it would break. The log entries were not helpful at all in this case. Since all was well a couple of days earlier, that gave a subtle hint that it could be data. I made reports from Experience Analytics, exported all the data into Excel from the day contact syncing had stopped and started looking for anything offbeat. I found two entries with a few special characters and had a hunch they might be causing issues with the pipeline batch. I removed the specific field that had special characters from the value mappings, ran the pipeline again and boom, the contacts started syncing again. I also identified that the special character the process had a problem with was '&'.

Now, we cannot remove the mapping permanently; we need a solution that avoids this situation on any other field mappings and on data collected on the xConnect contact in general. To do this, we need a new reader and converter that do exactly what you are guessing: strip out the offending character. We also need a template to make this happen. Below are the key components of this solution:

  • A template that has a place to enter the character and points to the custom reader defined in code.
New template for the reader that accepts the character of concern
Standard Values of the new template, which holds the default character and a Converter Type pointing to the custom namespace where the model resides and the corresponding dll
  • Next is to actually define the Converter and the Reader. I have included the code for both below:
using Sitecore.DataExchange.DataAccess;
using Sitecore.DependencyInjection;
using Sitecore.Marketing.Definitions;
using Sitecore.XConnect;
using System;
using System.Globalization;

namespace yournamespace
{
    public class ReplaceCharacterValueReader : IValueReader
    {

        public string SpecialCharacter { get; protected set; }

        public ReplaceCharacterValueReader(string character)
        {
            if (string.IsNullOrEmpty(character))
                throw new ArgumentOutOfRangeException(nameof(character), (object)character, "Empty Character");
            this.SpecialCharacter = character;
        }

        // Strips the configured special character out of string values as they are read;
        // non-string sources are returned as a negative read result.
        public virtual ReadResult Read(object source, DataAccessContext context)
        {
            if (!(source is string str))
                return ReadResult.NegativeResult(DateTime.Now);

            // Only rewrite the value when it actually contains the special character
            return string.IsNullOrWhiteSpace(str) || !str.Contains(SpecialCharacter)
                ? ReadResult.PositiveResult(source, DateTime.Now)
                : ReadResult.PositiveResult((object)str.Replace(SpecialCharacter, string.Empty), DateTime.Now);
        }
    }
}
using Sitecore.DataExchange;
using Sitecore.DataExchange.Attributes;
using Sitecore.DataExchange.Converters;
using Sitecore.DataExchange.DataAccess;
using Sitecore.DataExchange.Repositories;
using Sitecore.Services.Core.Model;

namespace yournamespace
{
    [SupportedIds(new string[] { "{7ABD90E0-5FC4-4547-8962-C0094F8A7CF7}" })]
    public class ReplaceCharacterValueReaderConverter : BaseItemModelConverter<IValueReader>
    {

        public const string FieldNameCharacter = "Character";

        public ReplaceCharacterValueReaderConverter(IItemModelRepository repository)
          : base(repository)
        {
        }

        protected override ConvertResult<IValueReader> ConvertSupportedItem(
          ItemModel source)
        {
            return this.PositiveResult((IValueReader)new ReplaceCharacterValueReader(this.GetStringValue(source, FieldNameCharacter)));
        }
    }
}
  • The last piece of the puzzle is to actually use a reader created from the template above as the source value transformer on any value mappings that could potentially contain this special character, or wherever applicable. In our case, the ‘Title’ field could definitely lead to this character, so we made sure to plug the above in for that value mapping.
Source Value Transformer injected on mapping of concern

That is it; we finally do not have to worry about this pesky little character stopping the data flow.

  • Problem: I noticed an issue on the Experience Profile where, on contacts, instead of seeing 'Preferred' on the contact email address, it started showing $name. Like below:
  • Debugging and Solution: It was important in this case to understand when the issue happens. Does it happen when we submit a form and load the xConnect contact information, or does it happen later when the sync job runs? In our case, it was happening when the pipeline batch responsible for syncing data from Salesforce to Sitecore ran. Now that we knew this, we had to investigate the value mappings and value accessors responsible for it. Going down that path, below is what I saw set on the item /sitecore/system/Data Exchange/my tenant/Value Mapping Sets/Salesforce to xConnect Contact Mappings/Salesforce Contact to xConnect Contact Emails Facet/Preferred Key. This definitely did not seem right; see the screenshot below:
Incorrect mapping on Preferred Key

I swapped the above for a new reader that constantly returns the string 'Preferred', which is what we need. This did the trick and we did not see the '$name' issue any more.

Swapped with a new constant value reader that always returns the string 'Preferred'

Showcase Shiny new Info on Experience Profile

In my last blog, we talked about how to extend an xConnect contact facet with additional information and add details to it when, for instance, a user submits a form. We can check that the data is saved properly by running a quick SQL query like the one below.

SELECT TOP (1000) [ContactId]
      ,[FacetKey]
      ,[LastModified]
      ,[ConcurrencyToken]
      ,[FacetData]
  FROM [notimpossible_Xdb.Collection.Shard0].[xdb_collection].[ContactFacets]
  where FacetKey ='SalesforceAccount'

You should see a string in FacetData similar to the one below. It should match the definition of the custom facet that was deployed. In my case, it looked like:

{"@odata.type":"#Website.Areas.MyProject.Models.DataExchange.ContactFacets.SalesforceAdditionalInformation","Organization":"verndale test","Industry":"Financial Services","Website":"www.test.com","Involvement":"","SelectedCity":"","ProposedCity":"","AdditionalDetails":"testing preferred key yet again"}

The query above lets you check whether the information you want is being stored on the contact, but a more elegant way is to actually see it on a special tab in the Experience Profile. There are a couple of blogs out there that can help you with this, especially if the goal is to use SPEAK. If you do not care about using SPEAK, do check out this one. In my case, I wanted to stick with SPEAK to see how far I could go. It took quite a number of steps, but a combination of resources and a few special steps helped me get there.

References that helped me:

https://www.konabos.com/blog/extending-sitecore-experience-profile-in-sitecore-9 -> This helped me get started and keep moving along the right path until I reached the step below. The link noted in there was a lot of information to digest and understand before proceeding; I wish that area were expanded in another blog post. It would probably have helped me knock this out continuing along the same path.

Step where I stumbled and did not know how to proceed

Well, since I had come this far, I did not want to give up. I kept looking for something that could keep me moving, and I finally found the one that filled in the gaps. This is that magic blog: https://xcentium.com/blog/2019/09/23/view-custom-facets-values-from-within-sitecore-experience-profile

Step #4 in that blog gave me the idea to duplicate the tab and adjust its presentation as needed using Sitecore Rocks, instead of creating brand-new items and scrambling to figure out what should be added to the presentation. One gap that blog does not mention: when you duplicate the Tab item, ensure you change the references in the new tab item's presentation to point to the appropriate internal items within the new tab item. For example, the items below needed to be swapped to ensure I did not break the existing Details tab and to show the values I needed on my new tab.

Change the above highlighted to self reference the newly created Tab
Change ID here to new TabID. If you leave it to old one, your existing details tab could potentially not work. 🙂
On LoadOnDemandPanel, ensure the itemid matches the ID of DetailsPanel under newly created tab. This should be swapped using Sitecore Rocks.
Change Target Control ID as well to ID of Border given on presentation.

Remember to change the IDs on all presentation components on the Tab item, to ensure there is no overlap with the existing Details tab item. When I did not do this, my original Details tab broke and stopped working.

Now that we have finished changing the presentation of the main Tab item, let's move to the Details Panel item, which is also based on the Tab template, '/sitecore/client/Business Component Library/version 1/Templates/Common/Tab'. Open this item in Sitecore Rocks and work with the presentation to update/inject new items. Also ensure you add new datasource items and swap existing references to the newly created items under the details panel. Finally, my presentation looked like below.

Highlighted ones are actually new additional fields that I would like to show on my Additional Information Tab on Experience Profile.

Do not forget to add the new label datasources needed. I took inspiration from where the existing ones live and dropped my new ones in this path: '/sitecore/client/Applications/ExperienceProfile/Common/System/Texts'

I added the additional CSS needed for this tab by adding a new stylesheet item, as shown below.

Additional css needed for this tab and new elements

Finally, the last step is to hone the JS that makes the right calls and reads/displays the JSON data. To configure which JS file to use, edit the SubCode presentation component's details.

Add file path to shiny new JS
//Here is JS Code for my new tab
define(["sitecore", "/-/speak/v1/experienceprofile/DataProviderHelper.js", "/-/speak/v1/experienceprofile/CintelUtl.js"], function (sc, providerHelper, cintelUtil) {
    var intelPath = "/intel",
        dataSetProperty = "dataSet";

    var cidParam = "cid";
    var getTypeValue = function (preffered, all) {
        if (preffered.Key) {
            return { Key: preffered.Key, Value: preffered.Value };
        } else if (all.length > 0) {
            return { Key: all[0].Key, Value: all[0].Value };
        }

        return null;
    };

    var app = sc.Definitions.App.extend({
        initialized: function () {
            var transformers = $.map(
                [
                    "default"
                ], function (tableName) {
                    return { urlKey: intelPath + "/" + tableName + "?", headerValue: tableName };
                });

            providerHelper.setupHeaders(transformers);
            providerHelper.addDefaultTransformerKey();
            this.setupContactDetail();
            this.setupContactAdditionalDetails();

        },

        setEmail: function (textControl, email) {
            if (email && email.indexOf("@") > -1) {
                cintelUtil.setText(textControl, "", true);
                textControl.viewModel.$el.html('<a href="mailto:' + email + '">' + email + '</a>');
            } else {
                cintelUtil.setText(textControl, email, true);
            }
        },

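        // Custom addition for this tab: reads the 'additionalcontactinfo' table exposed by
        // the ExperienceProfileContactViews pipeline group configured further below.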
        setupContactAdditionalDetails: function () {
            var contactId = cintelUtil.getQueryParam(cidParam);
            var tableName = "additionalcontactinfo";
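            // Experience Profile contact intel endpoint; the table name segment must match
            // the pipeline group element name in the config patch further below.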
            var baseUrl = "/sitecore/api/ao/v1/contacts/" + contactId + "/intel/" + tableName;

            providerHelper.initProvider(this.ContactDetailsDataProvider,
                tableName,
                baseUrl,
                this.DetailsTabMessageBar);

            providerHelper.getData(this.ContactDetailsDataProvider,
                $.proxy(function (jsonData) {
                    if (jsonData.data.dataSet != null && jsonData.data.dataSet.additionalcontactinfo.length > 0) {
                        // Data present set value content
                        var dataSet = jsonData.data.dataSet.additionalcontactinfo[0];
                        cintelUtil.setText(this.OrganizationValue, dataSet.Organization, false);
                        cintelUtil.setText(this.IndustryValue, dataSet.Industry, false);
                        cintelUtil.setText(this.WebsiteValue, dataSet.Website, false);
                        cintelUtil.setText(this.InvolvementValue, dataSet.Involvement, false);
                        cintelUtil.setText(this.SelectedCityValue, dataSet.SelectedCity, false);
                        cintelUtil.setText(this.ProposedCityValue, dataSet.ProposedCity, false);
                        cintelUtil.setText(this.AdditionalDetailsValue, dataSet.AdditionalDetails, false);
                    }
                }, this));
        },


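        // Everything below is unchanged from the out-of-the-box Details tab code.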
        setupContactDetail: function () {
            var getFullAddress = function (data) {
                var addressParts = [
                    data.streetLine1,
                    data.streetLine2,
                    data.streetLine3,
                    data.streetLine4,
                    data.city,
                    data.country,
                    data.postalCode
                ];

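                // $.map drops null return values, so empty address parts are filtered out
                // before joining with commas.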
                addressParts = $.map(addressParts, function (val) { return val ? val : null; });
                return addressParts.join(", ");
            };

            providerHelper.initProvider(this.ContactDetailsDataProvider, "", sc.Contact.baseUrl, this.DetailsTabMessageBar);
            providerHelper.getData(
                this.ContactDetailsDataProvider,
                $.proxy(function (jsonData) {
                    this.ContactDetailsDataProvider.set(dataSetProperty, jsonData);
                    var dataSet = this.ContactDetailsDataProvider.get(dataSetProperty);
                    var email = getTypeValue(jsonData.preferredEmailAddress, dataSet.emailAddresses);
                    if (jsonData.emailAddresses.length === 0 && email != null)
                        jsonData.emailAddresses.push(email);

                    var phone = getTypeValue(jsonData.preferredPhoneNumber, dataSet.phoneNumbers);
                    if (jsonData.phoneNumbers.length === 0 && phone != null)
                        jsonData.phoneNumbers.push(phone);

                    var address = getTypeValue(jsonData.preferredAddress, dataSet.addresses);
                    if (jsonData.addresses.length === 0 && address != null)
                        jsonData.addresses.push(address);

                    this.EmailColumnDataRepeater.viewModel.addData(jsonData.emailAddresses);
                    this.PhoneColumnDataRepeater.viewModel.addData(jsonData.phoneNumbers);
                    this.AddressColumnDataRepeater.viewModel.addData(jsonData.addresses);

                    cintelUtil.setText(this.FirstNameValue, jsonData.firstName, false);
                    cintelUtil.setText(this.MiddleNameValue, jsonData.middleName, false);
                    cintelUtil.setText(this.LastNameValue, jsonData.surName, false);

                    cintelUtil.setTitle(this.FirstNameValue, jsonData.firstName);
                    cintelUtil.setTitle(this.MiddleNameValue, jsonData.middleName);
                    cintelUtil.setTitle(this.LastNameValue, jsonData.surName);

                    cintelUtil.setText(this.TitleValue, jsonData.jobTitle, false);

                    cintelUtil.setText(this.GenderValue, jsonData.gender, false);
                    cintelUtil.setText(this.BirthdayValue, jsonData.formattedBirthDate, false);

                    if (email) {
                        cintelUtil.setText(this.PrimeEmailType, email.Key, true);
                        this.setEmail(this.PrimeEmailValue, email.Value.SmtpAddress);
                        cintelUtil.setTitle(this.PrimeEmailValue, email.Value.SmtpAddress);
                    }

                    if (phone) {
                        cintelUtil.setText(this.PrimePhoneType, phone.Key, true);
                        cintelUtil.setText(this.PrimePhoneValue, cintelUtil.getFullTelephone(phone.Value), true);
                    }

                    if (address) {
                        cintelUtil.setText(this.PrimeAddressType, address.Key, true);
                        cintelUtil.setText(this.PrimeAddressValue, getFullAddress(address.Value), true);
                    }
                }, this)
            );

            this.EmailColumnDataRepeater.on("subAppLoaded", function (args) {
                cintelUtil.setText(args.app.Type, args.data.Key, true);
                this.setEmail(args.app.Value, args.data.Value.SmtpAddress);
                cintelUtil.setTitle(args.app.Value, args.data.Value.SmtpAddress);
            }, this);

            this.PhoneColumnDataRepeater.on("subAppLoaded", function (args) {
                cintelUtil.setText(args.app.Type, args.data.Key, true);
                cintelUtil.setText(args.app.Value, cintelUtil.getFullTelephone(args.data.Value), true);
            }, this);

            this.AddressColumnDataRepeater.on("subAppLoaded", function (args) {
                cintelUtil.setText(args.app.Type, args.data.Key, true);
                cintelUtil.setText(args.app.Value, getFullAddress(args.data.Value), true);
            }, this);
        }
    });
    return app;
});

The only difference in code between the above and the out-of-the-box Details tab is the method this.setupContactAdditionalDetails(). This function calls the newly configured endpoint with the table name ‘additionalcontactinfo’, which is the name you give the pipeline group element in your configuration, noted below.

<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
  <sitecore>
    <pipelines>
      <group groupName="ExperienceProfileContactViews">
        <pipelines>
          <additionalcontactinfo>
            <processor type="Website.Pipelines.ContactFacets.SalesforceExtensions.ConstructSalesforceDataTable,Website" />
            <processor type="Website.Pipelines.ContactFacets.SalesforceExtensions.GetSalesforceDataActions,Website"/>
            <processor type="Sitecore.Cintel.Reporting.Processors.ApplySorting, Sitecore.Cintel"/>
            <processor type="Sitecore.Cintel.Reporting.Processors.ApplyPaging, Sitecore.Cintel"/>
          </additionalcontactinfo>
        </pipelines>
      </group>
    </pipelines>
  </sitecore>
</configuration>
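
If you are wondering what the two custom processors referenced in that config roughly look like, below is a minimal sketch, assuming the usual Sitecore.Cintel pattern of deriving from ReportProcessorBase. The member names used on args (ResultTableForView, etc.) follow the extension blogs linked earlier and are assumptions rather than my exact shipped code; the xConnect facet lookup is intentionally replaced with placeholder values to keep the sketch short.

using System.Data;
using Sitecore.Cintel.Reporting;

namespace Website.Pipelines.ContactFacets.SalesforceExtensions
{
    // Sketch: defines the columns of the 'additionalcontactinfo' table that the
    // Speak JS above reads from jsonData.data.dataSet.additionalcontactinfo.
    public class ConstructSalesforceDataTable : ReportProcessorBase
    {
        private static readonly string[] Columns =
        {
            "Organization", "Industry", "Website", "Involvement",
            "SelectedCity", "ProposedCity", "AdditionalDetails"
        };

        public override void Process(ReportProcessorArgs args)
        {
            // Assumption: ResultTableForView is the DataTable serialized back to the client.
            foreach (var name in Columns)
            {
                args.ResultTableForView.Columns.Add(name, typeof(string));
            }
        }
    }

    // Sketch: fills a single row with the contact's values. The real processor loads the
    // custom xConnect facet for the requested contact; placeholders are used here instead.
    public class GetSalesforceDataActions : ReportProcessorBase
    {
        public override void Process(ReportProcessorArgs args)
        {
            DataRow row = args.ResultTableForView.NewRow();
            row["Organization"] = "Acme Corp";          // facet.Organization in the real code
            row["Industry"] = "Manufacturing";          // facet.Industry
            row["AdditionalDetails"] = "Placeholder";   // ...and so on for the other columns
            args.ResultTableForView.Rows.Add(row);
        }
    }
}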

That is it. If all goes well and you did the configuration and steps mentioned above correctly, you should see the new tab loaded with the information you need.

New Tab showing all the additional information

That is it. Now I don’t have to open SQL Manager or run a query to verify that the data is updated correctly in Sitecore when a form is submitted with all its data, or when Salesforce is updated and synced back to Sitecore. Happy and productive!

Next up: a few hiccups after going live and how we solved them.

Shipping custom information to Salesforce

What do you do when the basic foundation of what you want to achieve is done, and done right? I never seem to get the opportunity to say that's it! There is always a next page, a next step, a next challenge to move on to. To be honest, I do not want it any other way; I love to keep building the next awesome thing. 🙂

That is exactly what happened when I wrapped up the data flow that I described in my post here. To continue enhancing and extending the data, we had to look at the variety of other information we collect from end users on our forms. We wanted to ship this richer data back to Salesforce too; I mean, why not? So let's see how we could do that.

Firstly, Sitecore has some awesome documentation on how to get started extending both the Sitecore contact and Salesforce to accommodate your data needs. As always, I will not repeat that information, but will try to highlight the challenges I had to resolve to make this happen and fill in any gaps that are not covered in the documentation here.

At an abstract level, the steps we need to do to get this rolling:
1. Add the fields needed on the Contact object in Salesforce
2. Extend the xConnect contact with the new fields needed for the longer forms (a minimal facet sketch follows this list)
3. Extend the collection models; follow the steps noted here to extend facets on xConnect and extend the Salesforce models in DEF:
https://doc.sitecore.com/developers/salesforce-connect/21/sitecore-connect-for-salesforce-crm/en/walkthrough–creating-a-custom-contact-facet.html
4. Add the additional mappings on the forms so the back end can retrieve the values entered in the new form fields.
5. Ensure the xConnect contact has the new info
6. Run the pipelines needed.
7. Ensure Salesforce has the new data
8. Verify the backward flow of data from Salesforce to Sitecore for the new facet fields
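
To make steps 2 and 3 a little more concrete, here is a minimal sketch of a custom facet and collection model in the style of the Sitecore walkthrough linked above. The class, facet key, and property names are illustrative placeholders, not necessarily the ones we ended up using.

using Sitecore.XConnect;
using Sitecore.XConnect.Schema;

namespace Website.Models.XConnect
{
    // Custom facet carrying the extra form fields we want to sync with Salesforce.
    public class AdditionalContactInfo : Facet
    {
        public const string DefaultFacetKey = "AdditionalContactInfo";

        public string Organization { get; set; }
        public string Industry { get; set; }
        public string Involvement { get; set; }
    }

    // Collection model that registers the facet. The custom model item's
    // 'Collection Model Type' field references the model built in code;
    // check the walkthrough above for the exact wiring.
    public class CustomCollectionModel
    {
        public static XdbModel Model { get; } = BuildModel();

        private static XdbModel BuildModel()
        {
            var builder = new XdbModelBuilder("CustomCollectionModel", new XdbModelVersion(1, 0));

            // Reference the default collection model so the standard contact facets remain available.
            builder.ReferenceModel(Sitecore.XConnect.Collection.Model.CollectionModel.Model);
            builder.DefineFacet<Contact, AdditionalContactInfo>(AdditionalContactInfo.DefaultFacetKey);

            return builder.Build();
        }
    }
}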

A few gaps:

  • The documentation does not call out that you need to click the option called ‘Convert Model in to Items’ after the custom model item is created and the corresponding namespace and DLL path are set in the ‘Collection Model Type’ field. This is really important; if you do not do this, things simply will not work. See the screenshot for guidance.
Click the highlighted option to create items from the model.
  • Ensure the new custom facet is selected under Contact Facets on the pipeline step: /sitecore/system/Data Exchange/yourtenant/Pipelines/xConnect Contacts and Interactions to Salesforce Sync/Custom Read Contacts from xConnect Pipeline/Read Contacts from xConnect
Pick the facets to read, otherwise the new mappings will not work
  • Also note that if you package your custom model located under /sitecore/system/Settings/Data Exchange/Providers/xConnect/Collection Models/Custom Models/ and move it to upper environments, the mappings do not work. The model item has to be created in each environment, and all references to these items have to be changed/swapped as well. If you do not do this, the mappings might not work. It took me a lot of backtracking to figure this out. It is painful to change references, but most references to the custom model specific items should be in here ->
    /sitecore/system/Data Exchange/your tenant/Data Access/Value Accessor Sets/Providers/xConnect/xConnect Contact Additional Information Facet OR whichever name you used when creating the item based on ‘xConnect Entity Facet Value Accessor Set’ while following the instructions here

That is all! If you are lucky, the new value mappings added under -> /sitecore/system/Data Exchange/your tenant/Value Mapping Sets/xConnect Contact to Salesforce Contact Mappings/Contact Model to Salesforce Contact should work. Good luck!

Now, let's make sure we know what to do so the additional data can flow backwards as well, from Salesforce to Sitecore. Most of the steps are covered by the Sitecore link above, but again there are a few things below that need to be done to make this work.

  • Ensure the pipeline step ‘Read Contacts From Salesforce’ below points to your tenant's Salesforce Contact Fields. In my case, it was pointing to branch-specific fields when I toggled the view to show the raw values.
Ensure it points to the correct set of extended fields from your tenant
  • Ensure the new custom facet is picked in the step highlighted below; otherwise the logs will show an "Already Exists" error while updating the custom facet.

That is all! If all goes well, you should now see the custom xConnect contact data in Salesforce and vice versa. Now what? Well, we could check whether the data is updating properly in the xConnect shard databases, but why not show these additional values on Experience Profile for ease of use? Also, we had a hiccup with the previously shipped functionality after it went live; we will see how we debugged and resolved that. We will cover both in the next set of blogs.

Implement data flow with Sitecore Connect for Salesforce CRM

I literally jumped with enthusiasm when I got the connector working. I could now see my xConnect contacts in Salesforce. What a joy to see the data flowing. I enjoyed the victory for almost a day, but I knew we had a long way to go before we could ship this live, so I moved on to the next set of challenges and tasks at hand. I am hugely inclined towards an algorithmic approach to any problem: I truly believe that if I know how to solve it, half of the problem is already solved. 🙂

Below is what we were hoping to do with the data flow:

  • Sitecore Forms: On our Sitecore website we had a bunch of forms, all built on Sitecore Forms behind the scenes. We were already using a custom submit action to send an email to the end user who submitted the form. Now we wanted to actually save a contact in Sitecore when a form is submitted. To achieve this, I followed the instructions noted here: https://doc.sitecore.com/developers/91/sitecore-experience-manager/en/walkthrough–creating-a-custom-submit-action-that-updates-contact-details.html
    Special note: While you follow the steps in the document above, especially while using Sitecore Rocks, remember that the renderings you pick should all be SPEAK 2; if you pick a SPEAK 1 version, it might not work.

    Once you successfully complete this setup and apply the mappings to the form fields back in the Forms editor, if you are lucky, you will see your form submission data in Experience Profile.
Mappings on the UpdateContact submit action
  • Implement a fallback: Remember, there could be fields that are not required from a Sitecore or forms perspective but are required from a Salesforce perspective. One such field in our case was Last Name. To ensure these contacts still get pushed to Salesforce, we implemented a fallback value reader and hooked it up in the value mapping configuration (a sketch of the idea follows this list).
Fallback reader that first tries to read the raw value and then reads a constant value such as ‘unknown’ if the raw value is empty
Then use the reader as the source value transformer on the value mapping corresponding to Last Name
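
Here is a rough sketch of the fallback reader idea, assuming the Data Exchange Framework IValueReader/ReadResult API. The converter class and the Sitecore item that actually register the reader on the tenant are not shown, and the constructor arguments are illustrative rather than the exact code we shipped.

using System;
using Sitecore.DataExchange.DataAccess;

namespace Website.DataExchange.Readers
{
    // Sketch: wraps another value reader and substitutes a constant (e.g. "unknown")
    // when the source value is empty, so required Salesforce fields such as Last Name
    // are always populated.
    public class FallbackValueReader : IValueReader
    {
        private readonly IValueReader _innerReader;
        private readonly string _fallbackValue;

        public FallbackValueReader(IValueReader innerReader, string fallbackValue)
        {
            _innerReader = innerReader;
            _fallbackValue = fallbackValue;
        }

        public ReadResult Read(object source, DataAccessContext context)
        {
            var result = _innerReader.Read(source, context);

            if (result != null && result.WasValueRead &&
                !string.IsNullOrWhiteSpace(result.ReadValue as string))
            {
                return result;
            }

            // Assumption: ReadResult exposes ReadValue/WasValueRead setters the same way
            // the stock DEF readers do.
            return new ReadResult(DateTime.UtcNow)
            {
                ReadValue = _fallbackValue,
                WasValueRead = true
            };
        }
    }
}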

At this point, it is important to verify that data can flow in either direction: from Sitecore to Salesforce via a run of the pipeline batch ‘XConnect Contacts to Salesforce’, and from Salesforce to Sitecore via a run of the pipeline batch ‘Salesforce Contacts to xConnect Sync’.

  • If all is well, we should be ready to wire up a scheduled job that keeps the two systems in sync as often as needed. More information here: https://doc.sitecore.com/developers/def/20/data-exchange-framework/en/scheduling-a-pipeline-batch.html
Create a new command based on ‘/sitecore/templates/Data Exchange/Framework/Tasks/Run Selected Pipeline Batches Command’
Create a new schedule and make it run at whatever interval is needed

Time to take this up a level. Let's send more information both to the contact in Sitecore and back to Salesforce. To do this, we must extend the contact facet and also figure out how to map this additional information back to Salesforce. That will be my next blog.

Installation caveats for Sitecore Connect for Salesforce CRM

This post is a continuation of my last one about the first steps; you can check it out here. Now that you have made all the decisions, hopefully all the right ones, you move to the immediate next step: "Get it installed and start pushing the data". Sitecore, as always, tried their best to provide clear instructions. You will find the basic instructions there, but, like always, there are a few gaps which you can only learn about by actually doing it. 🙂

In this post, I will cover a few of those gaps and how I worked around them, figured things out, and finally got the connector to work.

First and most important: you do not have to google for the packages that are needed for the connector to work. The Downloads section on the official Sitecore Connect page actually has all of the packages. When I was reviewing the installation guide, which lists all the packages that need to be installed, my quick instinct was to do a Google search lol. I wish Sitecore had just added a helpful hint about where to find them to make my life easier. 🙂 But if you are doing this step, look no further than here.

As we follow along with the instructions in the guide, one important thing to call out is section 5.2, which covers adding the Salesforce connection string to the Sitecore instance. There is a note right at the bottom of that section, and if you are like me, you might skip right over it, which is why I wanted to advise caution: do not skip it. 🙂 In my case, I did create a sandbox, and I would always recommend creating a sandbox for lower environments just to ensure the data on the production instance of Salesforce is kept clean.

Note from Documentation for section 5.2 on Installation guide

Now, let's say you finished all the steps noted in the installation guide to a tee, but you quickly find yourself stuck at step 5.5. This step is about configuring the endpoints on the newly created tenant and ensuring the troubleshooter runs successfully. Remember: after the model is generated (the JSON file) and dropped into the locations the guide calls out, the step below should not be forgotten. If it is, the newly generated model will not be available under ‘Show Deployed Models’ on the xConnect Client endpoint. So, here is the magic step:

“Restart all relevant Windows services and do an app pool recycle on the IIS xConnect site, and on your main website instance as well (why not).”

Now, when you click ‘Show Deployed Models’ on the xConnect Client endpoint, you should see the shiny new model, and the troubleshooter will also run just fine. A screenshot for reference:

View after clicking ‘Show Deployed Models’ in the ribbon on the xConnect Client endpoint
View after clicking ‘Run Troubleshooter’ on the xConnect Client endpoint

We all assume to some extent, right? We should not blame Sitecore for assuming that, after that many installation steps and several different package installs, we know to do a decent-sized publish so all the good stuff we just loaded gets to the web database as well. It is important to do this so that any dependencies also make it to the destination live database.

Now, the bits below took a big toll on me because, in my case, the Sitecore instance was not set up correctly on the upper environments with regard to xConnect, and I had to make some Solr-related corrections. I will not go into great detail about what happened, but at an abstract level the xConnect indexes and their configuration were not done right. Bottom line: for this installation to work as expected, the items below should be ensured and assured.

  • Experience Profile should load xConnect contacts without any errors
  • Ensure your xConnect logs are clean – even basic global key access denied errors should be cleaned up. Add the IIS AppPool\<xConnect app pool name> account to the performance-related user groups.
Showing the user groups that need permissions set up

Now, last but not least, a pesky little issue that ate more than half of my day. I made sure all the steps were wrapped up and everything was configured per plan, but when I ran the pipeline batch ‘xConnect Contacts to Salesforce Sync’, nothing happened. The logs generated were not helpful at all; it almost felt like the batch failed before it even started. That brought my attention to the endpoints. But wait, I had just verified that the endpoint troubleshooters ran fine for all three highlighted below:

After hours of scrambling, I found that my individual pipeline steps were using branch-specific endpoint references, which obviously fail because we never configured those; we configured the tenant-specific endpoints. The only way I could find this out was by using the ‘View Raw’ option in the ribbon and searching for that GUID; from the regular view everything looks alright because the names are the same between the tenant and the branch. I had similar issues with references to other pipelines within pipelines. For instance, the pipeline step highlighted below was actually from the default branch and not from the tenant, so any changes I made to my pipeline step or endpoint configuration were not being read; the batch was reading the branch content instead.

I am just glad I could find this issue. Now, it could be that I am just the unlucky one, but this is what made me look at the DEF tenant and pipeline steps on a much more intimate level. So yeah, it was annoying, but yep, it was worth it. 🙂

After initial connectivity was proven and core data was being pushed from Sitecore to Salesforce, we were on to the next step: attacking the custom implementation needed to meet our end goals and requirements. It was a sigh of relief that the connector was working, yet there was deep suspense by the end of this, as I was aware of the adventures ahead. I will continue this blog series to depict the journey that happened next.