Commit Graph

13 Commits

Author SHA1 Message Date
Mehdi Amini
308571074c Mass update the MLIR license header to mention "Part of the LLVM project"
This is an artifact of merging MLIR into LLVM; the file headers are
now aligned with the rest of the project.
2020-01-26 03:58:30 +00:00
Mehdi Amini
56222a0694 Adjust License.txt file to use the LLVM license
PiperOrigin-RevId: 286906740
2019-12-23 15:33:37 -08:00
River Riddle
2666b97314 NFC: Cleanup non-conforming usages of namespaces.
* Fixes use of anonymous namespace for static methods.
* Uses explicit qualifiers (mlir::) instead of wrapping the definition in the namespace (see the sketch after this entry).

PiperOrigin-RevId: 286222654
2019-12-18 10:46:48 -08:00
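The convention this cleanup enforces can be shown with a minimal, self-contained C++ sketch; the `Widget` struct and `helperValue` function are hypothetical names used purely for illustration:

```cpp
#include <cstdio>

namespace mlir {
struct Widget {
  void describe() const; // declared here, defined out of line below
};
} // namespace mlir

// File-local free functions are marked 'static' instead of being hidden in an
// anonymous namespace; anonymous namespaces are reserved for type definitions.
static int helperValue() { return 42; }

// Out-of-line member definitions use an explicit 'mlir::' qualifier rather
// than reopening 'namespace mlir { ... }' around the definition.
void mlir::Widget::describe() const {
  std::printf("value = %d\n", helperValue());
}

int main() {
  mlir::Widget w;
  w.describe();
}
```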
Kazuaki Ishizaki
8bfedb3ca5 Fix minor spelling tweaks (NFC)
Closes tensorflow/mlir#177

PiperOrigin-RevId: 275692653
2019-10-20 00:11:34 -07:00
Feng Liu
cf0a782339 Remove the constraint that min / max should straddle zero
Since we nudge the zero point so that the nudged zero point falls within
[qmin, qmax], the constraint that rmin / rmax must straddle zero isn't
necessary.

This also matches the documentation of TensorFlow's FakeQuantWithMinMaxArgs op,
where min and max don't need to straddle zero:
https://www.tensorflow.org/api_docs/python/tf/quantization/fake_quant_with_min_max_args

PiperOrigin-RevId: 268296285
2019-09-10 13:26:46 -07:00
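To make the nudging concrete, here is a small self-contained C++ sketch following the usual FakeQuant recipe; the `nudgedParams` helper and its exact rounding/clamping are illustrative assumptions, not the code touched by this commit:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <cstdio>

// Illustrative fake-quant parameter computation with zero-point nudging.
// This follows the general FakeQuant recipe, not the exact MLIR code.
struct QuantParams {
  double scale;
  int64_t zeroPoint;
};

static QuantParams nudgedParams(double rmin, double rmax, int64_t qmin,
                                int64_t qmax) {
  double scale = (rmax - rmin) / static_cast<double>(qmax - qmin);
  // Ideal zero point implied by rmin; it may be fractional or out of range.
  double zeroPointFromMin = static_cast<double>(qmin) - rmin / scale;
  // Nudge: round and clamp so the zero point is a valid storage value.
  int64_t nudged = static_cast<int64_t>(std::lround(zeroPointFromMin));
  nudged = std::max(qmin, std::min(qmax, nudged));
  return {scale, nudged};
}

int main() {
  // A real range that does not straddle zero, e.g. [1.0, 5.0], still yields a
  // valid (clamped) zero point inside the 8-bit storage range [0, 255].
  QuantParams p = nudgedParams(1.0, 5.0, 0, 255);
  std::printf("scale = %f, zeroPoint = %lld\n", p.scale,
              static_cast<long long>(p.zeroPoint));
}
```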
Feng Liu
c68d5467d6 Convert ConstFakeQuantPerAxis to qcast and dcast pair
This also adds a test of fakeQuantAttrsToType for per-channel fake quant.

PiperOrigin-RevId: 268260032
2019-09-10 10:50:57 -07:00
Feng Liu
27d776fa6d Convert per channel fake quant attributes to type
For per channel fake quant attributes, the returned type should be
UniformQuantizedPerAxisType. Currently, this method isn't under test because we
haven't added the quant_ConstFakeQuantPerAxis op and the convert method.

PiperOrigin-RevId: 268084017
2019-09-09 14:57:59 -07:00
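For orientation, here is a rough sketch of the data a per-axis quantized type carries compared with a per-layer one; these plain structs are illustrative stand-ins, not MLIR's actual UniformQuantizedType / UniformQuantizedPerAxisType classes:

```cpp
#include <cstdint>
#include <vector>

// Per-layer (per-tensor) uniform quantization: a single scale/zero-point pair.
struct PerLayerParams {
  double scale;
  int64_t zeroPoint;
};

// Per-axis (per-channel) uniform quantization: one scale/zero-point pair per
// slice along 'quantizedDimension'. Converting per-channel fake quant
// attributes (arrays of min/max plus an axis) produces data of this shape.
struct PerAxisParams {
  std::vector<double> scales;      // scales[i] applies to channel i
  std::vector<int64_t> zeroPoints; // zeroPoints[i] applies to channel i
  int32_t quantizedDimension;      // tensor dimension the channels run along
};
```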
Feng Liu
6de6c2c138 Add tests to verify 0.0 is quantized correctly
We should consider both signed and narrow_range cases.

PiperOrigin-RevId: 266167366
2019-08-29 10:09:22 -07:00
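As an illustrative check (numbers chosen here, not taken from the commit): with an unsigned 8-bit range [0, 255], scale 0.1, and zero point 128, the value 0.0 quantizes to round(0.0 / 0.1) + 128 = 128 and dequantizes back to (128 - 128) * 0.1 = 0.0 exactly; the signed and narrow_range cases only shift the storage range (e.g. [-128, 127] or [-127, 127]), and a nudged integer zero point keeps the round trip of 0.0 exact.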
Feng Liu
7dd5efdf2c Fix the equality check of two floating point values
PiperOrigin-RevId: 266022088
2019-08-28 16:39:48 -07:00
Feng Liu
701266c47a Add an "is_signed" attribute to the quant_ConstFakeQuant op
Some TensorFlow simulated quantize ops, such as QuantizeAndDequantizeV2Op, have
an attribute for the signedness of the quantization, so a new is_signed
attribute is added so that quant_ConstFakeQuant can represent it.

The method for converting these attributes to a QuantizedType is updated to
handle this new argument.

PiperOrigin-RevId: 258810290
2019-07-19 11:39:54 -07:00
Feng Liu
a6d2223584 Support signed and unsigned quantization types
This patch adds a new argument to the fakeQuantAttrsToType utility method so it
can convert min/max to a quantized type with either a signed or an unsigned
storage type.

PiperOrigin-RevId: 258382538
2019-07-16 13:45:29 -07:00
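A hedged sketch of how the storage range depends on signedness and narrow_range; the `storageRange` helper below is an assumption following the common fake-quant convention, not the actual fakeQuantAttrsToType implementation:

```cpp
#include <cstdint>
#include <cstdio>

// Storage range for a given bit width, signedness, and narrow_range flag.
// E.g. 8-bit unsigned -> [0, 255]; 8-bit signed -> [-128, 127];
// narrow_range drops the lowest value (e.g. [-127, 127]).
static void storageRange(unsigned numBits, bool isSigned, bool narrowRange,
                         int64_t &qmin, int64_t &qmax) {
  if (isSigned) {
    qmin = -(1LL << (numBits - 1));
    qmax = (1LL << (numBits - 1)) - 1;
  } else {
    qmin = 0;
    qmax = (1LL << numBits) - 1;
  }
  if (narrowRange)
    ++qmin;
}

int main() {
  int64_t qmin, qmax;
  storageRange(8, /*isSigned=*/true, /*narrowRange=*/true, qmin, qmax);
  std::printf("signed narrow 8-bit range: [%lld, %lld]\n",
              static_cast<long long>(qmin), static_cast<long long>(qmax));
}
```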
River Riddle
a4c3a6455c Move the emitError/Warning/Remark utility methods out of MLIRContext and into the mlir namespace.
Now that Locations are attributes, they have direct access to the MLIR context. This simplifies error emission by removing unnecessary context lookups.

PiperOrigin-RevId: 255112791
2019-06-25 21:32:23 -07:00
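After this change, diagnostics can be emitted through free functions in the mlir namespace. A minimal usage sketch follows; the `reportBadWidth` wrapper is hypothetical, and the header paths reflect today's tree layout, which may differ at this revision:

```cpp
#include "mlir/IR/Diagnostics.h"
#include "mlir/IR/Location.h"

// Because a Location is an attribute, it already carries its MLIRContext, so
// no explicit context argument or lookup is needed to emit a diagnostic.
void reportBadWidth(mlir::Location loc, unsigned width) {
  mlir::emitError(loc) << "unsupported storage width: " << width;
}
```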
Stella Laurenzo
d4dcf7de9e Move Quantization -> Dialect/QuantOps, FxpMathOps -> Dialect/FxpMathOps.
Adding the additional directory layer was discussed offline and matches the Target/ tree. The names match the de facto convention we seem to be following, where the C++ namespace is ^(.+)Ops/$ matched against the directory name.

This is in preparation for patching the Quantizer into this tree, which would have been confusing without moving the Quantization dialect to its more proper home. It is left to others to move other dialects if desired.

Tested:
  ninja check-mlir

--

PiperOrigin-RevId: 248171982
2019-05-20 13:41:55 -07:00