When getUserCost was transitioned to use an explicit CostKind, TCK_CodeSize was used even though the original kind was implicitly SizeAndLatency, so restore that behaviour. We now only query for CodeSize when optimising for minsize. I expect this to change nothing, as I believe all targets currently return the same value for CodeSize and SizeAndLatency; indeed, I see no changes in the test suite for Arm, AArch64 and X86.

Differential Revision: https://reviews.llvm.org/D85829
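For context, a minimal sketch of the selection described above, assuming the LLVM TargetTransformInfo API; chooseCostKind is a hypothetical helper for illustration only, not part of this patch:

#include "llvm/Analysis/TargetTransformInfo.h"
#include "llvm/IR/Function.h"

using namespace llvm;

// Query TCK_CodeSize only when the function is optimised for minimum size;
// otherwise fall back to TCK_SizeAndLatency, the kind getUserCost implicitly
// used before the transition to an explicit CostKind.
static TargetTransformInfo::TargetCostKind chooseCostKind(const Function &F) {
  return F.hasMinSize() ? TargetTransformInfo::TCK_CodeSize
                        : TargetTransformInfo::TCK_SizeAndLatency;
}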
if not 'ARM' in config.root.targets:
    config.unsupported = True