Monday 26 October 2020

Transaction Control (SavePoint & Rollback)

 Hi,

We are going to learn what Savepoint and Rollback are, along with their limitations.

Savepoint:

A point in the request that specifies the state of the database at that time. Any DML statement that occurs after the savepoint can be discarded, and the database can be restored to the same condition it was in at the time you generated the savepoint.

The Database.setSavepoint() method identifies a point in a transaction to which you can later roll back.

Rollback:

The Database.rollback(savepoint) method restores the database to the state it was in when the savepoint was created.

Eg:

Account a = new Account(Name = 'xxx');
insert a;
System.assertEquals(null, [SELECT AccountNumber FROM Account WHERE Id = :a.Id].AccountNumber);

// Create a savepoint while AccountNumber is null
Savepoint sp = Database.setSavepoint();

// Change the account number
a.AccountNumber = '123';
update a;
System.assertEquals('123', [SELECT AccountNumber FROM Account WHERE Id = :a.Id].AccountNumber);

// Roll back to the previous null value
Database.rollback(sp);
System.assertEquals(null, [SELECT AccountNumber FROM Account WHERE Id = :a.Id].AccountNumber);


The following limitations apply to generating savepoint variables and rolling back the database:

  • If you set more than one savepoint, then roll back to a savepoint that is not the last savepoint you generated, the later savepoint variables become invalid. For example, if you generated savepoint SP1 first, savepoint SP2 after that, and then you rolled back to SP1, the variable SP2 would no longer be valid. You will receive a runtime error if you try to use it.
  • References to savepoints cannot cross trigger invocations because each trigger invocation is a new trigger context. If you declare a savepoint as a static variable then try to use it across trigger contexts, you will receive a runtime error.
  • Each savepoint you set counts against the governor limit for DML statements.
  • Static variables are not reverted during a rollback. If you try to run the trigger again, the static variables retain the values from the first run.

  • Each rollback counts against the governor limit for DML statements. You will receive a runtime error if you try to roll back the database additional times.
  • The ID on an sObject inserted after setting a savepoint is not cleared after a rollback. Create an sObject to insert after a rollback. Attempting to insert the sObject using the variable created before the rollback fails because the sObject variable has an ID. Updating or upserting the sObject using the same variable also fails because the sObject is not in the database and, thus, cannot be updated.
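
Eg: The last limitation above is easy to hit in practice. Below is a minimal sketch (an assumed Account example, not from the reference) of the ID pitfall and a clone() workaround:

Savepoint sp = Database.setSavepoint();
Account a = new Account(Name = 'Temp');
insert a;

Database.rollback(sp);
// a.Id is still populated even though the record is no longer in the database.
// insert a;  // would fail: the sObject variable already carries an ID
// update a;  // would fail: the record does not exist in the database

// Workaround: create a fresh copy without the ID and insert that instead.
Account fresh = a.clone(false, true); // preserveId = false, isDeepClone = true
insert fresh;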


Reference:

https://developer.salesforce.com/docs/atlas.en-us.apexcode.meta/apexcode/langCon_apex_transaction_control.htm

Sunday 4 October 2020

How to Tune Data Relationships and Updates for Performance

 Hi,

Let's have a look at a few points below to tune data relationships and updates for performance.

We always need to understand the performance characteristics of the various maintenance operations we perform, and we should test substantial data uploads and changes to object relationships in a sandbox environment so we know what to expect.

Here are some specific suggestions.
  • Use a Public Read Only or Read/Write organization-wide default sharing model for all non-confidential data.
  • To avoid creating implicit shares, configure child objects to be Controlled by Parent wherever this configuration meets security requirements.
  • Configure parent-child relationships with no more than 10,000 children to one parent record.
  • If you are encountering only occasional locking errors, see if the addition of retry logic is sufficient to solve the problem (see the retry sketch after this list).
  • Sequence operations on parent and child objects by ParentID and ensure that different threads are operating on unique sets of records.
  • Tune your updates for maximum throughput by working with batch sizes, timeout values, the Bulk API, and other performance-optimizing techniques.
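
Eg: For the retry-logic suggestion above, a minimal sketch (the object, variable names, and attempt count are assumptions, not from the reference) could look like this:

// Retry a DML call a few times when a record-locking error occurs.
List<Contact> contactsToUpdate = [SELECT Id FROM Contact LIMIT 200];
Integer maxAttempts = 3;
for (Integer attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
        update contactsToUpdate; // the DML that intermittently hits record locks
        break;                   // success: stop retrying
    } catch (DmlException e) {
        // Retry only on record-locking errors, and only while attempts remain.
        Boolean lockError = e.getMessage().contains('UNABLE_TO_LOCK_ROW');
        if (!lockError || attempt == maxAttempts) {
            throw e;
        }
    }
}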

Reference:
https://developer.salesforce.com/docs/atlas.en-us.draes.meta/draes/draes_object_relationships_takeaway.htm?search_text=Skew

Saturday 3 October 2020

Implicit Sharing

 Hi,

Here we are going to learn about Implicit Sharing.

The sharing capabilities of the Lightning Platform include a wide variety of features that administrators can use to explicitly grant access to data for individuals and groups.

In addition to these more familiar functions, there are a number of sharing behaviors that are built into Salesforce applications. This kind of sharing is called implicit because it’s not configured by administrators; it’s defined and maintained by the system to support collaboration among members of sales teams, customer service representatives, and clients or customers.

Let's have a look at the following table, which describes the different kinds of implicit sharing built into Salesforce applications and the record access that each kind provides.

Type of Sharing: Parent
Provides: Read-only access to the parent account for a user with access to a child record
Details:
  • Not used when sharing on the child is controlled by its parent
  • Expensive to maintain with many account children
  • When a user loses access to a child, Salesforce needs to check all other children to see if it can delete the implicit parent.

Type of Sharing: Child
Provides: Access to child records for the owner of the parent account
Details:
  • Not used when sharing on the child is controlled by its parent
  • Controlled by child access settings for the account owner’s role
  • Supports account sharing rules that grant child record access
  • Supports account team access based on team settings
  • When a user loses access to the parent, Salesforce needs to remove all the implicit children for that user.

Type of Sharing: Portal
Provides: Access to the portal account and all associated contacts for all portal users under that account
Details: Shared to the lowest role under the portal account

Type of Sharing: High Volume*
Provides: Access to data owned by high volume users associated with a sharing set, for users who are members of the sharing set's access group
Details: All members of the sharing set access group gain access to every record owned by every high volume user associated with that sharing set

Type of Sharing: High Volume Parent
Provides: Read-only access to the parent account of records shared through a sharing set's access group, for users who are members of the group
Details: Maintains the ability to see the parent account when users are given access to account children owned by high volume users

*To allow portal users to scale into the millions, Community users have a streamlined sharing model that does not rely on roles or groups, and functions similarly to calendar events and activities. Community users are provisioned with the Service Cloud Portal or Authenticated Website licenses.
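
Eg: Implicit shares appear in the share tables with dedicated row causes. Assuming the documented RowCause values ImplicitParent and ImplicitChild, a query like the one below (run in the Developer Console) lets you inspect them on Account:

SELECT Id, UserOrGroupId, AccountAccessLevel, RowCause
FROM AccountShare
WHERE RowCause IN ('ImplicitParent', 'ImplicitChild')
LIMIT 50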


Reference:
https://developer.salesforce.com/docs/atlas.en-us.draes.meta/draes/draes_object_relationships_implicit_sharing.htm?search_text=Skew

Parent-Child Data Skew

 Hi,

Let's learn about Parent-Child Data Skew.

A common configuration that can lead to poor performance is the association of a large number of child records (10,000 or more) with a single parent account. 

Eg: A customer can have tens or hundreds of thousands of contacts generated by marketing campaigns or purchased from mailing lists—without any association to formal business accounts. If a contact is required to have an associated account, what should an administrator do? It might be convenient to park all those unallocated contacts under a single dummy account until their real business value and relationship can be determined.

While this option seems reasonable, this kind of parent-child data skew can cause serious performance problems in the maintenance of implicit sharing.
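
Eg: A quick way to check for this kind of skew (an assumed illustration, not from the reference) is an aggregate query that counts contacts per account and flags parents approaching the 10,000-child guideline:

SELECT AccountId, COUNT(Id) childCount
FROM Contact
WHERE AccountId != null
GROUP BY AccountId
HAVING COUNT(Id) > 9000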

Problem #1: Losing Access to a Child Record Under a Skewed Account

Assume that we have 300,000 unallocated contacts all under the same account. A user with access to one of these contacts will also have a parent implicit share in the account sharing table that gives him or her access to that account. Now, what happens if that user loses access to the contact?

In order to determine whether to remove his or her sharing to the account, Salesforce needs to scan all of the other 299,999 contacts to ensure that the user doesn’t have access to them either. This practice can become expensive if Salesforce is processing a lot of visibility changes on these highly skewed accounts.

Problem #2: Losing Access to the Skewed Parent Account


Consider the opposite scenario: The user has access to all 300,000 contacts because of his or her access to their parent account. What happens when the user loses access to the account?

This situation is not as problematic because the user must lose access to all the child records. Salesforce can query that list very quickly, but if there are very many child records, it might still take substantial time to delete all the relevant rows from the sharing tables for all the child objects.

Configuring a severe data skew on an account can also cause issues when customers make large-scale changes in sharing or realign sales assignments in Territory Management.

Eg: If the account is part of the source group for a sharing rule, and the administrator recalculates sharing on accounts, the work required to adjust the child entity access for that one account can cause the recalculation to become a long-running transaction or, in extreme cases, to fail altogether. Similar problems can occur when a territory realignment process attempts to evaluate assignment rules for a skewed account.


Reference:
https://developer.salesforce.com/docs/atlas.en-us.draes.meta/draes/draes_object_relationships_parent_child_data_skew.htm?search_text=Skew


Friday 2 October 2020

Ownership Data Skew

 Hi,

Let's see what Ownership Data Skew is.

We have different types of Data Skews.

  • Ownership Data Skew
  • Parent-Child Data Skew
Here we are going to discuss "Ownership Data Skew".

What is Ownership Data Skew?
When a single user owns more than 10,000 records of an object, we call that condition ownership data skew.


One of the common patterns involves customers concentrating ownership of data so that a single user or queue, or all the members of a single role or public group, owns most or all of the records for a particular object.

Eg:

A customer can assign all of his or her unassigned leads to a dummy user. This practice might seem like a convenient way to park unused data, but it can cause performance issues if those users are moved around the hierarchy, or if they are moved into or out of a role or group that is the source group for a sharing rule. In both cases, Salesforce must adjust a very large number of entries in the sharing tables, which can lead to a long-running recalculation of access rights.
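
Eg: An aggregate query (an assumed illustration using leads) can reveal owners who are over the 10,000-record threshold:

SELECT OwnerId, COUNT(Id) recordCount
FROM Lead
GROUP BY OwnerId
HAVING COUNT(Id) > 10000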


Distributing ownership of records across a greater number of users will decrease the chance of long-running updates occurring.

We can take the same approach when dealing with a large amount of data that is owned by or visible to the users under a single portal account—changing the owner of that account or moving those users in the hierarchy requires the system to recalculate all the sharing and inheritance for all the data under the account.


If we do have a compelling reason for assigning ownership to a small number of users, we can minimize possible performance impacts by not assigning the user(s) to a role.

If the user(s) must have a role to share data, Salesforce recommends that we:
  • Place them in a separate role at the top of the hierarchy
  • Not move them out of that top-level role
  • Keep them out of public groups that could be used as the source for sharing rules
Reference:
https://developer.salesforce.com/docs/atlas.en-us.draes.meta/draes/draes_group_membership_data_skew.htm?search_text=Data%20Skew

Thursday 1 October 2020

Visualforce Standard Controller method addFields(fieldNames)

 Hi,

When a Visualforce page is loaded, the fields accessible to the page are based on the fields referenced in the Visualforce markup. This method adds a reference to each field specified in "fieldNames" so that the controller can explicitly access those fields as well.

Here "fieldNames" data type is List<String> .

The strings in fieldNames can either be the API name of a field, such as AccountId, or they can be explicit relationships to fields, such as something__r.myField__c.

Usage:

This method should be called before a record has been loaded—typically, it's called by the controller's constructor. If this method is called outside of the constructor, you must use the reset() method before calling addFields().

This method is only for controllers used by dynamic Visualforce bindings.

Sample Example:

public AccountController(ApexPages.StandardController stdController) {
    // 'controller' is an ApexPages.StandardController member variable of this class
    this.controller = stdController;
    // Field names must be passed as strings
    List<String> fieldNamesList = new List<String>{'Type', 'Industry'};
    stdController.addFields(fieldNamesList);
}
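
If you need to add fields after the record has been loaded, call reset() before addFields(), as noted in the usage section above. A hypothetical helper method (the method name and the Website field are assumptions) might look like this:

public void addMoreFields() {
    // reset() forces the controller to reacquire access to newly referenced fields
    controller.reset();
    controller.addFields(new List<String>{'Website'});
}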

Reference:

https://developer.salesforce.com/docs/atlas.en-us.pages.meta/pages/apex_ApexPages_StandardController_addFields.htm


ContentDocument and ContentDocumentLink trigger behavior in Classic and Lightning on delete trigger

Hi,

Let's have a look at how the triggers behave written on "ContentDocument and ContentDocumentLink" objects in Salesforce Classic and Salesforce Lightning.

Here we are going to discuss what is going to happen when we write a trigger for the "delete" event on the above objects.

In Classic:

Only the ContentDocumentLink trigger fires; the ContentDocument trigger does not, because Salesforce deletes only the associated ContentDocumentLink record, not the ContentDocument record.

In Lightning Experience:

Both the ContentDocument and related ContentDocumentLink records are deleted, and by design Salesforce fires only the trigger on ContentDocument, not the trigger on ContentDocumentLink.

This is working as designed and can be verified by following the steps below:

1. Create two "before delete" triggers: one on the ContentDocument object and the other on the ContentDocumentLink object.

2. Place a "System.debug" statement in each, which can be verified in the Debug logs.
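
A minimal sketch of the two triggers from steps 1 and 2 (the trigger names match the ones referenced in the observations below):

trigger ContentDocumentTrigger on ContentDocument (before delete) {
    System.debug('ContentDocumentTrigger fired on delete');
}

trigger ContentDocumentLinkTrigger on ContentDocumentLink (before delete) {
    System.debug('ContentDocumentLinkTrigger fired on delete');
}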

3. Now upload 2 files to any object record under the 'Files' related list. Once done, you can observe both uploaded documents under the 'Files' tab.

4. Execute the below queries in the Developer Console.

SELECT Id, LinkedEntityId, ContentDocumentId FROM ContentDocumentLink WHERE LinkedEntityId = <<Id of the object record>>

2 records will be returned

SELECT Id, Title FROM ContentDocument WHERE Id IN (<<ContentDocumentIds from the above query>>)

2 rows will be returned

5. Set up the Debug logs

 IN CLASSIC: 

Delete one of the uploaded files by clicking the 'Del' link beside the document under the 'Files' related list.

OBSERVATION:

In the Debug logs you will see that only the ContentDocumentLinkTrigger fired, and the debug statement present in that trigger is displayed.

The document you have deleted will still be available under the 'Files' tab.

On executing the above 2 queries you will observe that only 1 row is returned for the 1st query and 2 rows for the second query, i.e., only the ContentDocumentLink has been removed.

 IN LIGHTNING:

Open the object record and delete the 2nd uploaded file by clicking the 'Del' link beside the document under the 'Files' related list.

OBSERVATION:

In the Debug logs you will see that only the ContentDocumentTrigger fired, and the debug statement present in that trigger is displayed.

The document you have deleted will no longer be available under the 'Files' tab.

On executing the above 2 queries you will observe that no rows are returned for the 1st query and 1 row for the second query (the one related to the 1st document), i.e., both the ContentDocument and the ContentDocumentLink have been removed.

Note: 

We should remember this behavior when writing triggers on the "delete" event for these objects.

This content is from the following Salesforce link.

Reference:

https://help.salesforce.com/articleView?id=000312746&language=en_US&type=1&mode=1


How to include a screen flow in a Lightning Web Component

 Hi, Assume  you have a flow called "Quick Contact Creation" and API Name for the same is "Quick_Contact_Creation". To i...