To support custom dataflows you have to turn on 'Data Replication'. You want custom dataflows because they give you better control over schedules, development, and debugging, plus insulation against dataflow failures caused by schema or data changes.
That being said, you will want to disable incremental updates. Troubleshooting has several times come back to the replication object holding stale data, even though the data was changed hours or days before we worked the issue, with replication and the dataflows both running.
Some things to note:
- New Records are picked up immediately
- Changes could take an hour before getting picked up
- Replication counts against the 24 dataflow limit
This might come across as negative but I am not sure why the replication settings give you the option to run hourly if it's going to cause a limits issue. I have 5 Dataflows plus replication and can only run a refresh once a day.
Use this resource to disable incremental extraction: Extract Data Incrementally in Replication
Or basically add this to your dataflows right before the fields: "incremental": false,
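For example, in a digest node the parameter sits alongside the object name, just above the field list. A minimal sketch (the node name, object, and fields here are hypothetical):

```json
{
  "Extract_Opportunity": {
    "action": "sfdcDigest",
    "parameters": {
      "object": "Opportunity",
      "incremental": false,
      "fields": [
        { "name": "Id" },
        { "name": "Amount" },
        { "name": "CloseDate" }
      ]
    }
  }
}
```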
In Salesforce.com Wave Analytics, in the new 2.0 Wave Designer, if you find that a component is un-selectable, or your browser freezes after selecting one, here are two things that can help resolve it:
- Check your XMD File
- When doing custom formatting through the XMD (adding percents and dollars specifically), you do not need to add a label; in some cases it will actually confuse the system, so remove it. You can access your XMD from dataset -> edit -> download, then look at it in a JSON viewer.
- Check your Widget title vs. measureField
- I had an issue where I added a Number field and afterwards had to go to the widget section and update the measureField. This was specifically a pigql measure with multiple columns, and somehow it had picked the wrong one.
Message me if your editor freezes up; I might have some more tips, but generally rechecking all the components and any XMD modifications I had made fixed my issue.
Power of the googles! My Plex server stopped downloading movie posters for anything added, new or old. I thought maybe it was a ploy to get people to upgrade to Premium, but it ended up being something easy to fix.
Check out this post and rename the folder it suggests: https://forums.plex.tv/discussion/158312/missing-movie-posters
If you have gotten the following message:
Failed to parse detail: START_TAG seen ...</sf:exceptionMessage><sf:upgradeURL>... @1:752 due to: com.sforce.ws.ConnectionException: unable to find end tag at: START_TAG seen ...</sf:exceptionMessage><sf:upgradeURL> ... @1:752
Or something like it, while trying to connect with Salesforce.com Data Loader, try upgrading your version. I had 35.0 and 37.0 both fail where a fresh copy of 38.0 worked.
When doing PDFs in Salesforce.com, one thing that can be tricky is dynamic content caught between page breaks. A signature block is a good example of something you want kept together on the page. The CSS 'page-break-inside' property can work 100% the first time, but on a busy page with lots of formatting and CSS you might run into issues, especially when using renderAs="pdf", since your HTML and CSS are rendered on the backend rather than by your latest-and-greatest browser. At least that is what I found when my CSS worked great in Chrome and IE but bombed when doing a download/renderAs PDF.
One solution I found that works 100% of the time is wrapping the table in question in an apex:outputPanel, like so:
<apex:outputPanel layout="block" style="page-break-inside: avoid;">
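A fuller sketch of where the panel sits on the page (the signature table contents are made up for illustration):

```html
<apex:page renderAs="pdf">
    <!-- ...long dynamic content that may run past a page break... -->
    <apex:outputPanel layout="block" style="page-break-inside: avoid;">
        <table>
            <tr><td>Signature:</td><td>____________________</td></tr>
            <tr><td>Date:</td><td>____________________</td></tr>
        </table>
    </apex:outputPanel>
</apex:page>
```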
I would like to take credit for this but as usual this was a google find: https://developer.salesforce.com/forums/?id=906F0000000AjSfIAK
Microsoft Excel...it loves to format dates in a crazy way. By default it will use day, dash, three-letter month name, dash, year, like this: 13-Sep-2016. Not helpful. In the U.S. your first instinct would be to change it to month/day/year: 10/13/2016. Salesforce.com does not like either date. Googling 'Date: invalid date' might lead you down the path of adding time to your date...don't do it! Even for Date/Time fields, if you don't have a specific time already, do the following...
Solution: Take your dates and do a custom format using the following: yyyy-m-d
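If you want to sanity-check the target format outside of Excel, GNU date can produce the same yyyy-m-d string (this assumes GNU coreutils; the %-m and %-d flags suppress leading zeros, and BSD/macOS date uses different options):

```shell
# Render a date in the yyyy-m-d layout Salesforce accepts
# (GNU date assumed)
date -d "Sep 13 2016" +%Y-%-m-%-d
# -> 2016-9-13
```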
Adding custom fields to a Change Set can be tedious. A quick way to add a bunch of fields to a Salesforce.com outbound change set is to create a report with those fields and save it, add the report to the change set, then click "View/Add Dependencies" and voila!
In my previous life/position, I was in charge of all things technology, including desktop support. One of the challenges of good desktop support, beyond the obvious (great customer service!), was keeping the machines patched, clean, and in good working order. In the last 10 years this was made a little easier by better imaging software and better malware support from antivirus vendors, but two bigger changes helped tamp down the constant churn. The first was the web filter. Web filtering is necessary to keep folks on the straight and narrow and to intercept the bad stuff people are exposed to, either intentionally or by chance. You don't want to be Big Brother, but you wouldn't believe the stuff this technology catches. The other strong idea was VDI, or virtual desktops. This was made possible by the VM revolution and the idea of accessing your applications and data everywhere! Smartphones and corporate Google Drive/Dropbox solutions drove the idea that where you kept or accessed your stuff, and on what device, became trivial; it was everywhere! VDI isn't for everyone, but it has a lot of value from support, security, and usability standpoints. The advances made in performance really make it hard to even tell you're on a VM.
I've been out of that game for almost 2 years now, but sometimes I see an article or point of view that aligns with my thinking and want to share:
Any security tech worth their salt will tell you the same thing. The network needs to be protected from the users themselves. They are the primary way bad things enter the environment. To that end you need to do several things.
1. Segment off the entire gamut of user PCs and apply the same access restriction methodology you do to the Internet feed. Use a white list approach. Yes, they can reach more services internally. No, they cannot obtain administrative access. The user in front of the PC has no bearing on the PC's access.
2. Remove the ability to administer anything directly. Create a set of 'jump' or 'hop' boxes which employ some form of two-factor authentication, from which all administrative functions originate. And this includes everything from networking gear to application administration. No PC should be able to obtain any form of administrative access to anything, anywhere.
3. Use end node segmentation. Every server and network device must have a separate, non-routable management interface. The primary IP address, the one with the configured default gateway, is the one used to provide services. The management interface has a disjoint IP address, as in it can't be derived from the schema used to create the primary addresses. It has no routing capability, as in it can't communicate outside of its configured subnet. The Hop-box through which it is managed is housed on the same subnet. Hop-boxes provide the service of 'management' to the environment and employ the same addressing and routing scheme. In this way remote, or off-site administration is accomplished through normal routing to the hop-box, not to the device's management interface.
4. Management applications use a VDI methodology housed on the hop box. This includes even SSH clients to the networking devices. They only display on the PC, they don't run in its memory space. As a best practice, all of your applications similarly run as VDI services for the same reason. The end PC becomes much closer to a 'terminal' or portal to the applications, and its memory space and CPU are used only to draw on the screen and communicate with the VDI service. There is a financial advantage as well to loading software only onto VDI servers, instead of a set of desktops. This also aids in writing the firewall rules for user PC's as the only services they need are for Internet access, and the VDI protocol itself. This is a thin-client kind of design without using actual thin client hardware.
5. Eliminate the use of local storage. This includes thumb drives but is really focused on documents. For the most part laptop hard drives are not part of any backup process, and at some point some middle manager will complain about a key spreadsheet they lost because the only copy was on their laptop hard drive that just went belly up. Avoid that. Put everything onto a file server which has access controls and a backup schedule. If you need transfer capabilities, use any number of secured file transfer methodologies. Yes you will require a network connection to access your files. No this isn't really a problem anymore, and why would you be updating your business critical spreadsheet held on a thumb drive you can lose?
Among other things this alleviates the need for draconian Internet filtering policies. Let the users browse Facebook or even dark web sites. They are treated as the security cesspool they are and they cannot achieve a secure stance no matter what is running on them.
Another thing this eliminates is the need to control local admin rights to the PC's. Let anyone load whatever software they like. Heck, let the web link load malware. It won't accomplish anything. You can keylog all you want, it won't get you any access.
The final advantage this has is more operational in nature. Given that there is nothing critical contained on the PC, then any PC will do. If one goes belly up or is compromised by malware, then simply replace it with another from spares and the user continues on their way. Mean Time To Resolution becomes the time it takes to dispatch a replacement, and the failed/corrupted device can be examined offline and without impact to the user.
I copied it here because things have a way of disappearing as people come and go.
If you are using CSVFIX and running UNIQUE against a large data set, remember that CSVFIX loads the entire file into memory to do the processing. So if this happens to you, check and watch your environment's memory. Mine only had 3 GB...it could use a little more for parsing 3+ million rows.
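If memory is tight, one streaming alternative (not CSVFIX itself) is sort(1), which spills to temporary files on disk instead of holding the whole file in RAM. A minimal sketch with a made-up file name; note that unlike UNIQUE this reorders the rows:

```shell
# Tiny stand-in for a multi-million-row extract
printf 'id,name\n1,alpha\n1,alpha\n2,beta\n' > big.csv

head -n 1 big.csv > deduped.csv               # keep the header row
tail -n +2 big.csv | sort -u >> deduped.csv   # dedupe the body on disk
```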
Sometimes I get SAP data that is quoted with both single quotes and double quotes around each value.
CSVFIX will strip out the extra single quotes with the following command:
- csvfix read_dsv -s "," -csv Input_File1.csv > Output_File1.csv
Seems unneeded but otherwise your output is double quoted. I blame SAP.
The magic is the '-csv' switch. Make this your first command and you're right as rain. I made it the second command in a script that first stripped the header row (see today's previous posting), and the results were not as expected. Could be just a fluke, as it should not have mattered.
BTW, Excel calls this 'Text Qualifier' on Step 2 of the data import wizard; that is another way to fix or strip these out, but it eliminates automation, which is the whole point of CSVFIX. Excel can handle single and double quotes, and will read the above example correctly for ad-hoc analysis and testing ETL.
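If CSVFIX is not handy, a quick-and-dirty sed pass can strip the extra single quotes to the same effect. The sample data below is hypothetical (each value double quoted with an extra single-quoted layer inside), and this assumes the values contain no legitimate apostrophes:

```shell
# Hypothetical doubly-quoted SAP export
printf '%s\n' "\"'0001'\",\"'Acme Corp'\"" > Input_File1.csv

# Drop every single quote, leaving normal double-quoted CSV
sed "s/'//g" Input_File1.csv > Output_File1.csv
cat Output_File1.csv   # -> "0001","Acme Corp"
```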